2025-06-03 14:47:58.563780 | Job console starting
2025-06-03 14:47:58.581221 | Updating git repos
2025-06-03 14:47:58.652703 | Cloning repos into workspace
2025-06-03 14:47:58.834299 | Restoring repo states
2025-06-03 14:47:58.852270 | Merging changes
2025-06-03 14:47:58.852289 | Checking out repos
2025-06-03 14:47:59.202266 | Preparing playbooks
2025-06-03 14:47:59.953807 | Running Ansible setup
2025-06-03 14:48:05.802392 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-03 14:48:06.991965 |
2025-06-03 14:48:06.992094 | PLAY [Base pre]
2025-06-03 14:48:07.026244 |
2025-06-03 14:48:07.026363 | TASK [Setup log path fact]
2025-06-03 14:48:07.055383 | orchestrator | ok
2025-06-03 14:48:07.088336 |
2025-06-03 14:48:07.088473 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-03 14:48:07.137796 | orchestrator | ok
2025-06-03 14:48:07.156188 |
2025-06-03 14:48:07.156300 | TASK [emit-job-header : Print job information]
2025-06-03 14:48:07.195055 | # Job Information
2025-06-03 14:48:07.195203 | Ansible Version: 2.16.14
2025-06-03 14:48:07.195237 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-06-03 14:48:07.195270 | Pipeline: post
2025-06-03 14:48:07.195293 | Executor: 521e9411259a
2025-06-03 14:48:07.195313 | Triggered by: https://github.com/osism/testbed/commit/9f82f19b799f2ff9349291af6591d1746288844d
2025-06-03 14:48:07.195335 | Event ID: b7ae53d0-4089-11f0-8d98-52bda40b9148
2025-06-03 14:48:07.202140 |
2025-06-03 14:48:07.202237 | LOOP [emit-job-header : Print node information]
2025-06-03 14:48:07.356587 | orchestrator | ok:
2025-06-03 14:48:07.356747 | orchestrator | # Node Information
2025-06-03 14:48:07.356792 | orchestrator | Inventory Hostname: orchestrator
2025-06-03 14:48:07.356823 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-03 14:48:07.356850 | orchestrator | Username: zuul-testbed02
2025-06-03 14:48:07.356877 | orchestrator | Distro: Debian 12.11
2025-06-03 14:48:07.356930 | orchestrator | Provider: static-testbed
2025-06-03 14:48:07.360100 | orchestrator | Region:
2025-06-03 14:48:07.360139 | orchestrator | Label: testbed-orchestrator
2025-06-03 14:48:07.360161 | orchestrator | Product Name: OpenStack Nova
2025-06-03 14:48:07.360182 | orchestrator | Interface IP: 81.163.193.140
2025-06-03 14:48:07.378039 |
2025-06-03 14:48:07.378148 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-03 14:48:08.140286 | orchestrator -> localhost | changed
2025-06-03 14:48:08.149744 |
2025-06-03 14:48:08.149845 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-03 14:48:09.236999 | orchestrator -> localhost | changed
2025-06-03 14:48:09.252948 |
2025-06-03 14:48:09.253051 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-03 14:48:09.510391 | orchestrator -> localhost | ok
2025-06-03 14:48:09.517170 |
2025-06-03 14:48:09.517279 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-03 14:48:09.545581 | orchestrator | ok
2025-06-03 14:48:09.561133 | orchestrator | included: /var/lib/zuul/builds/f4646c709e2e4f68ab8142ce5be2de26/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-03 14:48:09.581521 |
2025-06-03 14:48:09.581624 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-03 14:48:10.906885 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-06-03 14:48:10.907131 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/f4646c709e2e4f68ab8142ce5be2de26/work/f4646c709e2e4f68ab8142ce5be2de26_id_rsa
2025-06-03 14:48:10.907174 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/f4646c709e2e4f68ab8142ce5be2de26/work/f4646c709e2e4f68ab8142ce5be2de26_id_rsa.pub
2025-06-03 14:48:10.907204 | orchestrator -> localhost | The key fingerprint is:
2025-06-03 14:48:10.907233 | orchestrator -> localhost | SHA256:iMpm5DpWA8Hd631dH9UJVyVPPOYC6wgeAfWZHrJHw/c zuul-build-sshkey
2025-06-03 14:48:10.907258 | orchestrator -> localhost | The key's randomart image is:
2025-06-03 14:48:10.907291 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-03 14:48:10.907315 | orchestrator -> localhost | |. . ..o. .ooO|
2025-06-03 14:48:10.907340 | orchestrator -> localhost | | o . . .o o. .B+|
2025-06-03 14:48:10.907362 | orchestrator -> localhost | | . ...O .o o +|
2025-06-03 14:48:10.907383 | orchestrator -> localhost | | . o += +..o o |
2025-06-03 14:48:10.907405 | orchestrator -> localhost | | o o +.So+ .Eo .|
2025-06-03 14:48:10.907432 | orchestrator -> localhost | | + + . o.o o . |
2025-06-03 14:48:10.907454 | orchestrator -> localhost | | B . . |
2025-06-03 14:48:10.907475 | orchestrator -> localhost | |.= |
2025-06-03 14:48:10.907497 | orchestrator -> localhost | |o. |
2025-06-03 14:48:10.907519 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-03 14:48:10.907576 | orchestrator -> localhost | ok: Runtime: 0:00:00.835103
2025-06-03 14:48:10.914694 |
2025-06-03 14:48:10.914796 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-03 14:48:10.973162 | orchestrator | ok
2025-06-03 14:48:10.982711 | orchestrator | included: /var/lib/zuul/builds/f4646c709e2e4f68ab8142ce5be2de26/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-03 14:48:11.025941 |
2025-06-03 14:48:11.026058 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-03 14:48:11.039722 | orchestrator | skipping: Conditional result was False
2025-06-03 14:48:11.047687 |
2025-06-03 14:48:11.047782 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-03 14:48:11.635689 | orchestrator | changed
2025-06-03 14:48:11.647018 |
2025-06-03 14:48:11.647145 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-03 14:48:11.957285 | orchestrator | ok
2025-06-03 14:48:11.967871 |
2025-06-03 14:48:11.968052 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-03 14:48:12.380789 | orchestrator | ok
2025-06-03 14:48:12.391582 |
2025-06-03 14:48:12.391718 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-03 14:48:12.831850 | orchestrator | ok
2025-06-03 14:48:12.844799 |
2025-06-03 14:48:12.844950 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-03 14:48:12.904241 | orchestrator | skipping: Conditional result was False
2025-06-03 14:48:12.912676 |
2025-06-03 14:48:12.912808 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-03 14:48:13.535259 | orchestrator -> localhost | changed
2025-06-03 14:48:13.549603 |
2025-06-03 14:48:13.549746 | TASK [add-build-sshkey : Add back temp key]
2025-06-03 14:48:13.965469 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/f4646c709e2e4f68ab8142ce5be2de26/work/f4646c709e2e4f68ab8142ce5be2de26_id_rsa (zuul-build-sshkey)
2025-06-03 14:48:13.966204 | orchestrator -> localhost | ok: Runtime: 0:00:00.012591
2025-06-03 14:48:13.982387 |
2025-06-03 14:48:13.982530 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-03 14:48:14.399578 | orchestrator | ok
2025-06-03 14:48:14.414255 |
2025-06-03 14:48:14.414432 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-03 14:48:14.453296 | orchestrator | skipping: Conditional result was False
2025-06-03 14:48:14.567755 |
2025-06-03 14:48:14.567890 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-03 14:48:15.018638 | orchestrator | ok
2025-06-03 14:48:15.035989 |
2025-06-03 14:48:15.036131 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-03 14:48:15.117603 | orchestrator | ok
2025-06-03 14:48:15.139467 |
2025-06-03 14:48:15.139623 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-03 14:48:15.693045 | orchestrator -> localhost | ok
2025-06-03 14:48:15.708455 |
2025-06-03 14:48:15.708588 | TASK [validate-host : Collect information about the host]
2025-06-03 14:48:16.955422 | orchestrator | ok
2025-06-03 14:48:16.981397 |
2025-06-03 14:48:16.981541 | TASK [validate-host : Sanitize hostname]
2025-06-03 14:48:17.113040 | orchestrator | ok
2025-06-03 14:48:17.163418 |
2025-06-03 14:48:17.163632 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-03 14:48:17.925561 | orchestrator -> localhost | changed
2025-06-03 14:48:17.933017 |
2025-06-03 14:48:17.933145 | TASK [validate-host : Collect information about zuul worker]
2025-06-03 14:48:18.377354 | orchestrator | ok
2025-06-03 14:48:18.383767 |
2025-06-03 14:48:18.383902 | TASK [validate-host : Write out all zuul information for each host]
2025-06-03 14:48:19.557709 | orchestrator -> localhost | changed
2025-06-03 14:48:19.573247 |
2025-06-03 14:48:19.573389 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-03 14:48:19.894066 | orchestrator | ok
2025-06-03 14:48:19.902039 |
2025-06-03 14:48:19.902175 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-03 14:48:55.696261 | orchestrator | changed:
2025-06-03 14:48:55.696498 | orchestrator | .d..t...... src/
2025-06-03 14:48:55.696535 | orchestrator | .d..t...... src/github.com/
2025-06-03 14:48:55.696560 | orchestrator | .d..t...... src/github.com/osism/
2025-06-03 14:48:55.696582 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-03 14:48:55.696603 | orchestrator | RedHat.yml
2025-06-03 14:48:55.713459 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-03 14:48:55.713477 | orchestrator | RedHat.yml
2025-06-03 14:48:55.713529 | orchestrator | = 1.53.0"...
2025-06-03 14:49:11.044890 | orchestrator | 14:49:11.044 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-06-03 14:49:11.135020 | orchestrator | 14:49:11.134 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-06-03 14:49:12.224966 | orchestrator | 14:49:12.224 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-03 14:49:13.076273 | orchestrator | 14:49:13.076 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-03 14:49:14.243453 | orchestrator | 14:49:14.243 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-06-03 14:49:15.612489 | orchestrator | 14:49:15.612 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-06-03 14:49:16.787868 | orchestrator | 14:49:16.787 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-03 14:49:17.607190 | orchestrator | 14:49:17.607 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-03 14:49:17.607289 | orchestrator | 14:49:17.607 STDOUT terraform: Providers are signed by their developers.
2025-06-03 14:49:17.607297 | orchestrator | 14:49:17.607 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-03 14:49:17.607342 | orchestrator | 14:49:17.607 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-03 14:49:17.607414 | orchestrator | 14:49:17.607 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-03 14:49:17.607469 | orchestrator | 14:49:17.607 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-03 14:49:17.607518 | orchestrator | 14:49:17.607 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-03 14:49:17.607540 | orchestrator | 14:49:17.607 STDOUT terraform: you run "tofu init" in the future.
2025-06-03 14:49:17.607987 | orchestrator | 14:49:17.607 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-03 14:49:17.608075 | orchestrator | 14:49:17.608 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-03 14:49:17.608129 | orchestrator | 14:49:17.608 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-03 14:49:17.608137 | orchestrator | 14:49:17.608 STDOUT terraform: should now work.
2025-06-03 14:49:17.608184 | orchestrator | 14:49:17.608 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-03 14:49:17.608251 | orchestrator | 14:49:17.608 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-03 14:49:17.608309 | orchestrator | 14:49:17.608 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-03 14:49:18.709765 | orchestrator | 14:49:18.709 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-06-03 14:49:18.944094 | orchestrator | 14:49:18.943 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-03 14:49:18.944200 | orchestrator | 14:49:18.944 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-03 14:49:18.944362 | orchestrator | 14:49:18.944 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-03 14:49:18.944413 | orchestrator | 14:49:18.944 STDOUT terraform: for this configuration.
2025-06-03 14:49:19.186462 | orchestrator | 14:49:19.186 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-06-03 14:49:19.305726 | orchestrator | 14:49:19.305 STDOUT terraform: ci.auto.tfvars
2025-06-03 14:49:19.528597 | orchestrator | 14:49:19.528 STDOUT terraform: default_custom.tf
2025-06-03 14:49:20.853439 | orchestrator | 14:49:20.853 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-06-03 14:49:21.780393 | orchestrator | 14:49:21.780 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
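The provider selections resolved by `tofu init` above would correspond to a `required_providers` block of roughly the following shape. This is a sketch inferred from the log, not the testbed repository's actual configuration: only the `>= 2.2.0` constraint for hashicorp/local, the absence of a constraint for hashicorp/null ("Finding latest version"), and the resolved versions are visible above; the `>= 1.53.0` fragment suggests a constraint on the openstack provider, but its exact form is truncated in the log.

```hcl
terraform {
  required_providers {
    openstack = {
      # Resolved to v3.1.0 in this run; the constraint itself is
      # truncated in the log (only '= 1.53.0"...' survives).
      source = "terraform-provider-openstack/openstack"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # resolved to v2.5.3 in this run
    }
    null = {
      source = "hashicorp/null" # unconstrained; resolved to v3.2.4
    }
  }
}
```

The exact versions chosen are then pinned in `.terraform.lock.hcl`, which the init output above recommends committing so later `tofu init` runs reproduce the same selections.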
2025-06-03 14:49:22.318462 | orchestrator | 14:49:22.318 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-06-03 14:49:22.530886 | orchestrator | 14:49:22.530 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-06-03 14:49:22.530949 | orchestrator | 14:49:22.530 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-06-03 14:49:22.531038 | orchestrator | 14:49:22.530 STDOUT terraform:   + create
2025-06-03 14:49:22.531048 | orchestrator | 14:49:22.531 STDOUT terraform:  <= read (data resources)
2025-06-03 14:49:22.531079 | orchestrator | 14:49:22.531 STDOUT terraform: OpenTofu will perform the following actions:
2025-06-03 14:49:22.531205 | orchestrator | 14:49:22.531 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-06-03 14:49:22.531247 | orchestrator | 14:49:22.531 STDOUT terraform:   # (config refers to values not yet known)
2025-06-03 14:49:22.531280 | orchestrator | 14:49:22.531 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-06-03 14:49:22.531312 | orchestrator | 14:49:22.531 STDOUT terraform:       + checksum    = (known after apply)
2025-06-03 14:49:22.531342 | orchestrator | 14:49:22.531 STDOUT terraform:       + created_at  = (known after apply)
2025-06-03 14:49:22.531373 | orchestrator | 14:49:22.531 STDOUT terraform:       + file        = (known after apply)
2025-06-03 14:49:22.531406 | orchestrator | 14:49:22.531 STDOUT terraform:       + id          = (known after apply)
2025-06-03 14:49:22.531435 | orchestrator | 14:49:22.531 STDOUT terraform:       + metadata    = (known after apply)
2025-06-03 14:49:22.531465 | orchestrator | 14:49:22.531 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-06-03 14:49:22.531494 | orchestrator | 14:49:22.531 STDOUT terraform:       + min_ram_mb  = (known after apply)
2025-06-03 14:49:22.531512 | orchestrator | 14:49:22.531 STDOUT terraform:       + most_recent = true
2025-06-03 14:49:22.531531 | orchestrator | 14:49:22.531 STDOUT terraform:       + name        = (known after apply)
2025-06-03 14:49:22.531566 | orchestrator | 14:49:22.531 STDOUT terraform:       + protected   = (known after apply)
2025-06-03 14:49:22.531627 | orchestrator | 14:49:22.531 STDOUT terraform:       + region      = (known after apply)
2025-06-03 14:49:22.531660 | orchestrator | 14:49:22.531 STDOUT terraform:       + schema      = (known after apply)
2025-06-03 14:49:22.531690 | orchestrator | 14:49:22.531 STDOUT terraform:       + size_bytes  = (known after apply)
2025-06-03 14:49:22.531720 | orchestrator | 14:49:22.531 STDOUT terraform:       + tags        = (known after apply)
2025-06-03 14:49:22.531750 | orchestrator | 14:49:22.531 STDOUT terraform:       + updated_at  = (known after apply)
2025-06-03 14:49:22.531756 | orchestrator | 14:49:22.531 STDOUT terraform:     }
2025-06-03 14:49:22.531842 | orchestrator | 14:49:22.531 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-06-03 14:49:22.531849 | orchestrator | 14:49:22.531 STDOUT terraform:   # (config refers to values not yet known)
2025-06-03 14:49:22.531895 | orchestrator | 14:49:22.531 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-06-03 14:49:22.531921 | orchestrator | 14:49:22.531 STDOUT terraform:       + checksum    = (known after apply)
2025-06-03 14:49:22.531949 | orchestrator | 14:49:22.531 STDOUT terraform:       + created_at  = (known after apply)
2025-06-03 14:49:22.531983 | orchestrator | 14:49:22.531 STDOUT terraform:       + file        = (known after apply)
2025-06-03 14:49:22.532012 | orchestrator | 14:49:22.531 STDOUT terraform:       + id          = (known after apply)
2025-06-03 14:49:22.532041 | orchestrator | 14:49:22.532 STDOUT terraform:       + metadata    = (known after apply)
2025-06-03 14:49:22.532069 | orchestrator | 14:49:22.532 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-06-03 14:49:22.532100 | orchestrator | 14:49:22.532 STDOUT terraform:       + min_ram_mb  = (known after apply)
2025-06-03 14:49:22.532117 | orchestrator | 14:49:22.532 STDOUT terraform:       + most_recent = true
2025-06-03 14:49:22.532146 | orchestrator | 14:49:22.532 STDOUT terraform:       + name        = (known after apply)
2025-06-03 14:49:22.532175 | orchestrator | 14:49:22.532 STDOUT terraform:       + protected   = (known after apply)
2025-06-03 14:49:22.532203 | orchestrator | 14:49:22.532 STDOUT terraform:       + region      = (known after apply)
2025-06-03 14:49:22.532242 | orchestrator | 14:49:22.532 STDOUT terraform:       + schema      = (known after apply)
2025-06-03 14:49:22.532275 | orchestrator | 14:49:22.532 STDOUT terraform:       + size_bytes  = (known after apply)
2025-06-03 14:49:22.532294 | orchestrator | 14:49:22.532 STDOUT terraform:       + tags        = (known after apply)
2025-06-03 14:49:22.532325 | orchestrator | 14:49:22.532 STDOUT terraform:       + updated_at  = (known after apply)
2025-06-03 14:49:22.532331 | orchestrator | 14:49:22.532 STDOUT terraform:     }
2025-06-03 14:49:22.532366 | orchestrator | 14:49:22.532 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-06-03 14:49:22.532395 | orchestrator | 14:49:22.532 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-06-03 14:49:22.532431 | orchestrator | 14:49:22.532 STDOUT terraform:       + content              = (known after apply)
2025-06-03 14:49:22.532467 | orchestrator | 14:49:22.532 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-03 14:49:22.532503 | orchestrator | 14:49:22.532 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-03 14:49:22.532539 | orchestrator | 14:49:22.532 STDOUT terraform:       + content_md5          = (known after apply)
2025-06-03 14:49:22.532575 | orchestrator | 14:49:22.532 STDOUT terraform:       + content_sha1         = (known after apply)
2025-06-03 14:49:22.532611 | orchestrator | 14:49:22.532 STDOUT terraform:       + content_sha256       = (known after apply)
2025-06-03 14:49:22.532646 | orchestrator | 14:49:22.532 STDOUT terraform:       + content_sha512       = (known after apply)
2025-06-03 14:49:22.532669 | orchestrator | 14:49:22.532 STDOUT terraform:       + directory_permission = "0777"
2025-06-03 14:49:22.532693 | orchestrator | 14:49:22.532 STDOUT terraform:       + file_permission      = "0644"
2025-06-03 14:49:22.532729 | orchestrator | 14:49:22.532 STDOUT terraform:       + filename             = ".MANAGER_ADDRESS.ci"
2025-06-03 14:49:22.532766 | orchestrator | 14:49:22.532 STDOUT terraform:       + id                   = (known after apply)
2025-06-03 14:49:22.532772 | orchestrator | 14:49:22.532 STDOUT terraform:     }
2025-06-03 14:49:22.532815 | orchestrator | 14:49:22.532 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-06-03 14:49:22.532839 | orchestrator | 14:49:22.532 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-06-03 14:49:22.532876 | orchestrator | 14:49:22.532 STDOUT terraform:       + content              = (known after apply)
2025-06-03 14:49:22.532910 | orchestrator | 14:49:22.532 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-03 14:49:22.532944 | orchestrator | 14:49:22.532 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-03 14:49:22.532980 | orchestrator | 14:49:22.532 STDOUT terraform:       + content_md5          = (known after apply)
2025-06-03 14:49:22.533015 | orchestrator | 14:49:22.532 STDOUT terraform:       + content_sha1         = (known after apply)
2025-06-03 14:49:22.533051 | orchestrator | 14:49:22.533 STDOUT terraform:       + content_sha256       = (known after apply)
2025-06-03 14:49:22.533087 | orchestrator | 14:49:22.533 STDOUT terraform:       + content_sha512       = (known after apply)
2025-06-03 14:49:22.533110 | orchestrator | 14:49:22.533 STDOUT terraform:       + directory_permission = "0777"
2025-06-03 14:49:22.533134 | orchestrator | 14:49:22.533 STDOUT terraform:       + file_permission      = "0644"
2025-06-03 14:49:22.533166 | orchestrator | 14:49:22.533 STDOUT terraform:       + filename             = ".id_rsa.ci.pub"
2025-06-03 14:49:22.533201 | orchestrator | 14:49:22.533 STDOUT terraform:       + id                   = (known after apply)
2025-06-03 14:49:22.533208 | orchestrator | 14:49:22.533 STDOUT terraform:     }
2025-06-03 14:49:22.533239 | orchestrator | 14:49:22.533 STDOUT terraform:   # local_file.inventory will be created
2025-06-03 14:49:22.533268 | orchestrator | 14:49:22.533 STDOUT terraform:   + resource "local_file" "inventory" {
2025-06-03 14:49:22.533303 | orchestrator | 14:49:22.533 STDOUT terraform:       + content              = (known after apply)
2025-06-03 14:49:22.533337 | orchestrator | 14:49:22.533 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-03 14:49:22.533372 | orchestrator | 14:49:22.533 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-03 14:49:22.533410 | orchestrator | 14:49:22.533 STDOUT terraform:       + content_md5          = (known after apply)
2025-06-03 14:49:22.533448 | orchestrator | 14:49:22.533 STDOUT terraform:       + content_sha1         = (known after apply)
2025-06-03 14:49:22.533480 | orchestrator | 14:49:22.533 STDOUT terraform:       + content_sha256       = (known after apply)
2025-06-03 14:49:22.533515 | orchestrator | 14:49:22.533 STDOUT terraform:       + content_sha512       = (known after apply)
2025-06-03 14:49:22.533538 | orchestrator | 14:49:22.533 STDOUT terraform:       + directory_permission = "0777"
2025-06-03 14:49:22.533563 | orchestrator | 14:49:22.533 STDOUT terraform:       + file_permission      = "0644"
2025-06-03 14:49:22.533593 | orchestrator | 14:49:22.533 STDOUT terraform:       + filename             = "inventory.ci"
2025-06-03 14:49:22.533629 | orchestrator | 14:49:22.533 STDOUT terraform:       + id                   = (known after apply)
2025-06-03 14:49:22.533635 | orchestrator | 14:49:22.533 STDOUT terraform:     }
2025-06-03 14:49:22.533668 | orchestrator | 14:49:22.533 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-06-03 14:49:22.533699 | orchestrator | 14:49:22.533 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-06-03 14:49:22.533731 | orchestrator | 14:49:22.533 STDOUT terraform:       + content              = (sensitive value)
2025-06-03 14:49:22.533767 | orchestrator | 14:49:22.533 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-03 14:49:22.533802 | orchestrator | 14:49:22.533 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-03 14:49:22.533836 | orchestrator | 14:49:22.533 STDOUT terraform:       + content_md5          = (known after apply)
2025-06-03 14:49:22.533871 | orchestrator | 14:49:22.533 STDOUT terraform:       + content_sha1         = (known after apply)
2025-06-03 14:49:22.533907 | orchestrator | 14:49:22.533 STDOUT terraform:       + content_sha256       = (known after apply)
2025-06-03 14:49:22.533941 | orchestrator | 14:49:22.533 STDOUT terraform:       + content_sha512       = (known after apply)
2025-06-03 14:49:22.533964 | orchestrator | 14:49:22.533 STDOUT terraform:       + directory_permission = "0700"
2025-06-03 14:49:22.533988 | orchestrator | 14:49:22.533 STDOUT terraform:       + file_permission      = "0600"
2025-06-03 14:49:22.534027 | orchestrator | 14:49:22.533 STDOUT terraform:       + filename             = ".id_rsa.ci"
2025-06-03 14:49:22.534069 | orchestrator | 14:49:22.534 STDOUT terraform:       + id                   = (known after apply)
2025-06-03 14:49:22.534076 | orchestrator | 14:49:22.534 STDOUT terraform:     }
2025-06-03 14:49:22.534109 | orchestrator | 14:49:22.534 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-06-03 14:49:22.534138 | orchestrator | 14:49:22.534 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-06-03 14:49:22.534159 | orchestrator | 14:49:22.534 STDOUT terraform:       + id = (known after apply)
2025-06-03 14:49:22.534165 | orchestrator | 14:49:22.534 STDOUT terraform:     }
2025-06-03 14:49:22.534224 | orchestrator | 14:49:22.534 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-06-03 14:49:22.534291 | orchestrator | 14:49:22.534 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-06-03 14:49:22.534328 | orchestrator | 14:49:22.534 STDOUT terraform:       + attachment           = (known after apply)
2025-06-03 14:49:22.534352 | orchestrator | 14:49:22.534 STDOUT terraform:       + availability_zone    = "nova"
2025-06-03 14:49:22.534388 | orchestrator | 14:49:22.534 STDOUT terraform:       + id                   = (known after apply)
2025-06-03 14:49:22.534423 | orchestrator | 14:49:22.534 STDOUT terraform:       + image_id             = (known after apply)
2025-06-03 14:49:22.534458 | orchestrator | 14:49:22.534 STDOUT terraform:       + metadata             = (known after apply)
2025-06-03 14:49:22.534502 | orchestrator | 14:49:22.534 STDOUT terraform:       + name                 = "testbed-volume-manager-base"
2025-06-03 14:49:22.534542 | orchestrator | 14:49:22.534 STDOUT terraform:       + region               = (known after apply)
2025-06-03 14:49:22.534562 | orchestrator | 14:49:22.534 STDOUT terraform:       + size                 = 80
2025-06-03 14:49:22.534587 | orchestrator | 14:49:22.534 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-03 14:49:22.534611 | orchestrator | 14:49:22.534 STDOUT terraform:       + volume_type          = "ssd"
2025-06-03 14:49:22.534618 | orchestrator | 14:49:22.534 STDOUT terraform:     }
2025-06-03 14:49:22.534669 | orchestrator | 14:49:22.534 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-06-03 14:49:22.534714 | orchestrator | 14:49:22.534 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-03 14:49:22.534751 | orchestrator | 14:49:22.534 STDOUT terraform:       + attachment           = (known after apply)
2025-06-03 14:49:22.534776 | orchestrator | 14:49:22.534 STDOUT terraform:       + availability_zone    = "nova"
2025-06-03 14:49:22.534818 | orchestrator | 14:49:22.534 STDOUT terraform:       + id                   = (known after apply)
2025-06-03 14:49:22.534853 | orchestrator | 14:49:22.534 STDOUT terraform:       + image_id             = (known after apply)
2025-06-03 14:49:22.534889 | orchestrator | 14:49:22.534 STDOUT terraform:       + metadata             = (known after apply)
2025-06-03 14:49:22.534933 | orchestrator | 14:49:22.534 STDOUT terraform:       + name                 = "testbed-volume-0-node-base"
2025-06-03 14:49:22.534969 | orchestrator | 14:49:22.534 STDOUT terraform:       + region               = (known after apply)
2025-06-03 14:49:22.534990 | orchestrator | 14:49:22.534 STDOUT terraform:       + size                 = 80
2025-06-03 14:49:22.535014 | orchestrator | 14:49:22.534 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-03 14:49:22.535038 | orchestrator | 14:49:22.535 STDOUT terraform:       + volume_type          = "ssd"
2025-06-03 14:49:22.535045 | orchestrator | 14:49:22.535 STDOUT terraform:     }
2025-06-03 14:49:22.535127 | orchestrator | 14:49:22.535 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-06-03 14:49:22.535174 | orchestrator | 14:49:22.535 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-03 14:49:22.535227 | orchestrator | 14:49:22.535 STDOUT terraform:       + attachment           = (known after apply)
2025-06-03 14:49:22.535300 | orchestrator | 14:49:22.535 STDOUT terraform:       + availability_zone    = "nova"
2025-06-03 14:49:22.535333 | orchestrator | 14:49:22.535 STDOUT terraform:       + id                   = (known after apply)
2025-06-03 14:49:22.535372 | orchestrator | 14:49:22.535 STDOUT terraform:       + image_id             = (known after apply)
2025-06-03 14:49:22.535404 | orchestrator | 14:49:22.535 STDOUT terraform:       + metadata             = (known after apply)
2025-06-03 14:49:22.535448 | orchestrator | 14:49:22.535 STDOUT terraform:       + name                 = "testbed-volume-1-node-base"
2025-06-03 14:49:22.535484 | orchestrator | 14:49:22.535 STDOUT terraform:       + region               = (known after apply)
2025-06-03 14:49:22.535505 | orchestrator | 14:49:22.535 STDOUT terraform:       + size                 = 80
2025-06-03 14:49:22.535529 | orchestrator | 14:49:22.535 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-03 14:49:22.535554 | orchestrator | 14:49:22.535 STDOUT terraform:       + volume_type          = "ssd"
2025-06-03 14:49:22.535560 | orchestrator | 14:49:22.535 STDOUT terraform:     }
2025-06-03 14:49:22.535610 | orchestrator | 14:49:22.535 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-06-03 14:49:22.535656 | orchestrator | 14:49:22.535 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-03 14:49:22.535691 | orchestrator | 14:49:22.535 STDOUT terraform:       + attachment           = (known after apply)
2025-06-03 14:49:22.535716 | orchestrator | 14:49:22.535 STDOUT terraform:       + availability_zone    = "nova"
2025-06-03 14:49:22.535752 | orchestrator | 14:49:22.535 STDOUT terraform:       + id                   = (known after apply)
2025-06-03 14:49:22.535788 | orchestrator | 14:49:22.535 STDOUT terraform:       + image_id             = (known after apply)
2025-06-03 14:49:22.535823 | orchestrator | 14:49:22.535 STDOUT terraform:       + metadata             = (known after apply)
2025-06-03 14:49:22.535868 | orchestrator | 14:49:22.535 STDOUT terraform:       + name                 = "testbed-volume-2-node-base"
2025-06-03 14:49:22.535904 | orchestrator | 14:49:22.535 STDOUT terraform:       + region               = (known after apply)
2025-06-03 14:49:22.535924 | orchestrator | 14:49:22.535 STDOUT terraform:       + size                 = 80
2025-06-03 14:49:22.535949 | orchestrator | 14:49:22.535 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-03 14:49:22.535975 | orchestrator | 14:49:22.535 STDOUT terraform:       + volume_type          = "ssd"
2025-06-03 14:49:22.535982 | orchestrator | 14:49:22.535 STDOUT terraform:     }
2025-06-03 14:49:22.536031 | orchestrator | 14:49:22.535 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-06-03 14:49:22.536078 | orchestrator | 14:49:22.536 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-03 14:49:22.536117 | orchestrator | 14:49:22.536 STDOUT terraform:       + attachment           = (known after apply)
2025-06-03 14:49:22.536128 | orchestrator | 14:49:22.536 STDOUT terraform:       + availability_zone    = "nova"
2025-06-03 14:49:22.536171 | orchestrator | 14:49:22.536 STDOUT terraform:       + id                   = (known after apply)
2025-06-03 14:49:22.536207 | orchestrator | 14:49:22.536 STDOUT terraform:       + image_id             = (known after apply)
2025-06-03 14:49:22.536258 | orchestrator | 14:49:22.536 STDOUT terraform:       + metadata             = (known after apply)
2025-06-03 14:49:22.536300 | orchestrator | 14:49:22.536 STDOUT terraform:       + name                 = "testbed-volume-3-node-base"
2025-06-03 14:49:22.536334 | orchestrator | 14:49:22.536 STDOUT terraform:       + region               = (known after apply)
2025-06-03 14:49:22.536346 | orchestrator | 14:49:22.536 STDOUT terraform:       + size                 = 80
2025-06-03 14:49:22.536375 | orchestrator | 14:49:22.536 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-03 14:49:22.536399 | orchestrator | 14:49:22.536 STDOUT terraform:       + volume_type          = "ssd"
2025-06-03 14:49:22.536405 | orchestrator | 14:49:22.536 STDOUT terraform:     }
2025-06-03 14:49:22.536454 | orchestrator | 14:49:22.536 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-06-03 14:49:22.536501 | orchestrator | 14:49:22.536 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-03 14:49:22.536538 | orchestrator | 14:49:22.536 STDOUT terraform:       + attachment           = (known after apply)
2025-06-03 14:49:22.536561 | orchestrator | 14:49:22.536 STDOUT terraform:       + availability_zone    = "nova"
2025-06-03 14:49:22.536597 | orchestrator | 14:49:22.536 STDOUT terraform:       + id                   = (known after apply)
2025-06-03 14:49:22.536631 | orchestrator | 14:49:22.536 STDOUT terraform:       + image_id             = (known after apply)
2025-06-03 14:49:22.536668 | orchestrator | 14:49:22.536 STDOUT terraform:       + metadata             = (known after apply)
2025-06-03 14:49:22.536711 | orchestrator | 14:49:22.536 STDOUT terraform:       + name                 = "testbed-volume-4-node-base"
2025-06-03 14:49:22.536748 | orchestrator | 14:49:22.536 STDOUT terraform:       + region               = (known after apply)
2025-06-03 14:49:22.536796 | orchestrator | 14:49:22.536 STDOUT terraform:       + size                 = 80
2025-06-03 14:49:22.536807 | orchestrator | 14:49:22.536 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-03 14:49:22.536812 | orchestrator | 14:49:22.536 STDOUT terraform:       + volume_type          = "ssd"
2025-06-03 14:49:22.536817 | orchestrator | 14:49:22.536 STDOUT terraform:     }
2025-06-03 14:49:22.536861 | orchestrator | 14:49:22.536 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-06-03 14:49:22.536905 | orchestrator | 14:49:22.536 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-03 14:49:22.536940 | orchestrator | 14:49:22.536 STDOUT terraform:       + attachment           = (known after apply)
2025-06-03 14:49:22.536964 | orchestrator | 14:49:22.536 STDOUT terraform:       + availability_zone    = "nova"
2025-06-03 14:49:22.536999 | orchestrator | 14:49:22.536 STDOUT terraform:       + id                   = (known after apply)
2025-06-03 14:49:22.537034 | orchestrator | 14:49:22.536 STDOUT terraform:       + image_id             = (known after apply)
2025-06-03 14:49:22.537069 | orchestrator | 14:49:22.537 STDOUT terraform:       + metadata             = (known after apply)
2025-06-03 14:49:22.537114 | orchestrator | 14:49:22.537 STDOUT terraform:       + name                 = "testbed-volume-5-node-base"
2025-06-03 14:49:22.537150 | orchestrator | 14:49:22.537 STDOUT terraform:       + region               = (known after apply)
2025-06-03 14:49:22.537167 | orchestrator | 14:49:22.537 STDOUT terraform:       + size                 = 80
2025-06-03 14:49:22.537192 | orchestrator | 14:49:22.537 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-03 14:49:22.537239 | orchestrator | 14:49:22.537 STDOUT terraform:       + volume_type          = "ssd"
2025-06-03 14:49:22.537245 | orchestrator | 14:49:22.537 STDOUT terraform:     }
2025-06-03 14:49:22.537281 | orchestrator | 14:49:22.537 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-06-03 14:49:22.537323 | orchestrator | 14:49:22.537 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-03 14:49:22.537358 | orchestrator | 14:49:22.537 STDOUT terraform:       + attachment = (known after apply)
2025-06-03 14:49:22.537383 | orchestrator | 14:49:22.537 STDOUT terraform:       +
availability_zone = "nova" 2025-06-03 14:49:22.537420 | orchestrator | 14:49:22.537 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.537454 | orchestrator | 14:49:22.537 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:49:22.537492 | orchestrator | 14:49:22.537 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-06-03 14:49:22.537583 | orchestrator | 14:49:22.537 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.537612 | orchestrator | 14:49:22.537 STDOUT terraform:  + size = 20 2025-06-03 14:49:22.537636 | orchestrator | 14:49:22.537 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:49:22.537661 | orchestrator | 14:49:22.537 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:49:22.537678 | orchestrator | 14:49:22.537 STDOUT terraform:  } 2025-06-03 14:49:22.537736 | orchestrator | 14:49:22.537 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-06-03 14:49:22.537784 | orchestrator | 14:49:22.537 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-03 14:49:22.537817 | orchestrator | 14:49:22.537 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:49:22.537843 | orchestrator | 14:49:22.537 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:49:22.537881 | orchestrator | 14:49:22.537 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.537916 | orchestrator | 14:49:22.537 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:49:22.537960 | orchestrator | 14:49:22.537 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-06-03 14:49:22.537998 | orchestrator | 14:49:22.537 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.538031 | orchestrator | 14:49:22.537 STDOUT terraform:  + size = 20 2025-06-03 14:49:22.538062 | orchestrator | 14:49:22.538 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:49:22.538086 | orchestrator | 
14:49:22.538 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:49:22.538093 | orchestrator | 14:49:22.538 STDOUT terraform:  } 2025-06-03 14:49:22.538138 | orchestrator | 14:49:22.538 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-06-03 14:49:22.538181 | orchestrator | 14:49:22.538 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-03 14:49:22.538251 | orchestrator | 14:49:22.538 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:49:22.538292 | orchestrator | 14:49:22.538 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:49:22.538330 | orchestrator | 14:49:22.538 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.538376 | orchestrator | 14:49:22.538 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:49:22.538416 | orchestrator | 14:49:22.538 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-06-03 14:49:22.538473 | orchestrator | 14:49:22.538 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.538496 | orchestrator | 14:49:22.538 STDOUT terraform:  + size = 20 2025-06-03 14:49:22.538520 | orchestrator | 14:49:22.538 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:49:22.538549 | orchestrator | 14:49:22.538 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:49:22.538555 | orchestrator | 14:49:22.538 STDOUT terraform:  } 2025-06-03 14:49:22.538602 | orchestrator | 14:49:22.538 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-06-03 14:49:22.538649 | orchestrator | 14:49:22.538 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-03 14:49:22.538689 | orchestrator | 14:49:22.538 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:49:22.538719 | orchestrator | 14:49:22.538 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:49:22.538756 | orchestrator | 14:49:22.538 STDOUT 
terraform:  + id = (known after apply) 2025-06-03 14:49:22.538792 | orchestrator | 14:49:22.538 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:49:22.538830 | orchestrator | 14:49:22.538 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-06-03 14:49:22.538865 | orchestrator | 14:49:22.538 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.538882 | orchestrator | 14:49:22.538 STDOUT terraform:  + size = 20 2025-06-03 14:49:22.538909 | orchestrator | 14:49:22.538 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:49:22.538933 | orchestrator | 14:49:22.538 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:49:22.538940 | orchestrator | 14:49:22.538 STDOUT terraform:  } 2025-06-03 14:49:22.538992 | orchestrator | 14:49:22.538 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-06-03 14:49:22.539034 | orchestrator | 14:49:22.538 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-03 14:49:22.539076 | orchestrator | 14:49:22.539 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:49:22.539100 | orchestrator | 14:49:22.539 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:49:22.539136 | orchestrator | 14:49:22.539 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.539171 | orchestrator | 14:49:22.539 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:49:22.539226 | orchestrator | 14:49:22.539 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-06-03 14:49:22.539340 | orchestrator | 14:49:22.539 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.539348 | orchestrator | 14:49:22.539 STDOUT terraform:  + size = 20 2025-06-03 14:49:22.539382 | orchestrator | 14:49:22.539 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:49:22.539406 | orchestrator | 14:49:22.539 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:49:22.539437 | 
orchestrator | 14:49:22.539 STDOUT terraform:  } 2025-06-03 14:49:22.539487 | orchestrator | 14:49:22.539 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-06-03 14:49:22.539533 | orchestrator | 14:49:22.539 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-03 14:49:22.539571 | orchestrator | 14:49:22.539 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:49:22.539596 | orchestrator | 14:49:22.539 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:49:22.539632 | orchestrator | 14:49:22.539 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.539673 | orchestrator | 14:49:22.539 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:49:22.539712 | orchestrator | 14:49:22.539 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-06-03 14:49:22.539750 | orchestrator | 14:49:22.539 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.539767 | orchestrator | 14:49:22.539 STDOUT terraform:  + size = 20 2025-06-03 14:49:22.539797 | orchestrator | 14:49:22.539 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:49:22.539821 | orchestrator | 14:49:22.539 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:49:22.539829 | orchestrator | 14:49:22.539 STDOUT terraform:  } 2025-06-03 14:49:22.539879 | orchestrator | 14:49:22.539 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-06-03 14:49:22.539921 | orchestrator | 14:49:22.539 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-03 14:49:22.539957 | orchestrator | 14:49:22.539 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:49:22.539980 | orchestrator | 14:49:22.539 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:49:22.540023 | orchestrator | 14:49:22.539 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.540059 | orchestrator | 
14:49:22.540 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:49:22.540097 | orchestrator | 14:49:22.540 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-06-03 14:49:22.540134 | orchestrator | 14:49:22.540 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.540163 | orchestrator | 14:49:22.540 STDOUT terraform:  + size = 20 2025-06-03 14:49:22.540194 | orchestrator | 14:49:22.540 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:49:22.540227 | orchestrator | 14:49:22.540 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:49:22.540237 | orchestrator | 14:49:22.540 STDOUT terraform:  } 2025-06-03 14:49:22.540297 | orchestrator | 14:49:22.540 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-06-03 14:49:22.540339 | orchestrator | 14:49:22.540 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-03 14:49:22.540374 | orchestrator | 14:49:22.540 STDOUT terraform:  + attachment = (known after apply) 2025-06-03 14:49:22.540407 | orchestrator | 14:49:22.540 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:49:22.540444 | orchestrator | 14:49:22.540 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.540484 | orchestrator | 14:49:22.540 STDOUT terraform:  + metadata = (known after apply) 2025-06-03 14:49:22.540522 | orchestrator | 14:49:22.540 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-06-03 14:49:22.540560 | orchestrator | 14:49:22.540 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.540586 | orchestrator | 14:49:22.540 STDOUT terraform:  + size = 20 2025-06-03 14:49:22.540609 | orchestrator | 14:49:22.540 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-03 14:49:22.540638 | orchestrator | 14:49:22.540 STDOUT terraform:  + volume_type = "ssd" 2025-06-03 14:49:22.540644 | orchestrator | 14:49:22.540 STDOUT terraform:  } 2025-06-03 14:49:22.540708 | orchestrator | 
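The volume blocks in this plan differ only in index, name, and size, which suggests two count-indexed resources. A minimal HCL sketch that would produce such a plan follows; the count values and name expressions are inferred from the plan output, not taken from the osism/testbed repository, so treat them as assumptions:

```hcl
# Sketch only — reconstructed from the plan output above; the real
# testbed configuration may be organized differently.

# Boot volumes: node_base_volume[0..5], 80 GB each.
resource "openstack_blockstorage_volume_v3" "node_base_volume" {
  count             = 6
  name              = "testbed-volume-${count.index}-node-base"
  size              = 80
  availability_zone = "nova"
  volume_type       = "ssd"
}

# Extra data volumes: node_volume[0..8], 20 GB each. The plan shows the
# node suffix cycling 3, 4, 5, modeled here as 3 + count.index % 3.
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count             = 9
  name              = "testbed-volume-${count.index}-node-${3 + count.index % 3}"
  size              = 20
  availability_zone = "nova"
  volume_type       = "ssd"
}
```

Attributes shown as `(known after apply)` in the plan (`id`, `region`, `attachment`, `metadata`) are computed by the provider and do not appear in the configuration.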
  # openstack_blockstorage_volume_v3.node_volume[8] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-8-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (known after apply)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
14:49:22.547 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-03 14:49:22.547779 | orchestrator | 14:49:22.547 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:49:22.547803 | orchestrator | 14:49:22.547 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:49:22.547822 | orchestrator | 14:49:22.547 STDOUT terraform:  + config_drive = true 2025-06-03 14:49:22.547859 | orchestrator | 14:49:22.547 STDOUT terraform:  + created = (known after apply) 2025-06-03 14:49:22.547893 | orchestrator | 14:49:22.547 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-03 14:49:22.547917 | orchestrator | 14:49:22.547 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-03 14:49:22.547944 | orchestrator | 14:49:22.547 STDOUT terraform:  + force_delete = false 2025-06-03 14:49:22.547978 | orchestrator | 14:49:22.547 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-03 14:49:22.548014 | orchestrator | 14:49:22.547 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.548050 | orchestrator | 14:49:22.548 STDOUT terraform:  + image_id = (known after apply) 2025-06-03 14:49:22.548089 | orchestrator | 14:49:22.548 STDOUT terraform:  + image_name = (known after apply) 2025-06-03 14:49:22.548114 | orchestrator | 14:49:22.548 STDOUT terraform:  + key_pair = "testbed" 2025-06-03 14:49:22.548143 | orchestrator | 14:49:22.548 STDOUT terraform:  + name = "testbed-node-4" 2025-06-03 14:49:22.548168 | orchestrator | 14:49:22.548 STDOUT terraform:  + power_state = "active" 2025-06-03 14:49:22.548202 | orchestrator | 14:49:22.548 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.548262 | orchestrator | 14:49:22.548 STDOUT terraform:  + security_groups = (known after apply) 2025-06-03 14:49:22.548299 | orchestrator | 14:49:22.548 STDOUT terraform:  + stop_before_destroy = false 2025-06-03 14:49:22.548337 | orchestrator | 14:49:22.548 STDOUT terraform:  + updated = (known after apply) 2025-06-03 
14:49:22.548389 | orchestrator | 14:49:22.548 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-03 14:49:22.548396 | orchestrator | 14:49:22.548 STDOUT terraform:  + block_device { 2025-06-03 14:49:22.548425 | orchestrator | 14:49:22.548 STDOUT terraform:  + boot_index = 0 2025-06-03 14:49:22.548459 | orchestrator | 14:49:22.548 STDOUT terraform:  + delete_on_termination = false 2025-06-03 14:49:22.548488 | orchestrator | 14:49:22.548 STDOUT terraform:  + destination_type = "volume" 2025-06-03 14:49:22.548526 | orchestrator | 14:49:22.548 STDOUT terraform:  + multiattach = false 2025-06-03 14:49:22.548551 | orchestrator | 14:49:22.548 STDOUT terraform:  + source_type = "volume" 2025-06-03 14:49:22.548589 | orchestrator | 14:49:22.548 STDOUT terraform:  + uuid = (known after apply) 2025-06-03 14:49:22.548596 | orchestrator | 14:49:22.548 STDOUT terraform:  } 2025-06-03 14:49:22.548603 | orchestrator | 14:49:22.548 STDOUT terraform:  + network { 2025-06-03 14:49:22.548630 | orchestrator | 14:49:22.548 STDOUT terraform:  + access_network = false 2025-06-03 14:49:22.548661 | orchestrator | 14:49:22.548 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-03 14:49:22.548699 | orchestrator | 14:49:22.548 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-03 14:49:22.548754 | orchestrator | 14:49:22.548 STDOUT terraform:  + mac = (known after apply) 2025-06-03 14:49:22.548785 | orchestrator | 14:49:22.548 STDOUT terraform:  + name = (known after apply) 2025-06-03 14:49:22.548815 | orchestrator | 14:49:22.548 STDOUT terraform:  + port = (known after apply) 2025-06-03 14:49:22.548846 | orchestrator | 14:49:22.548 STDOUT terraform:  + uuid = (known after apply) 2025-06-03 14:49:22.548856 | orchestrator | 14:49:22.548 STDOUT terraform:  } 2025-06-03 14:49:22.548861 | orchestrator | 14:49:22.548 STDOUT terraform:  } 2025-06-03 14:49:22.548908 | orchestrator | 14:49:22.548 STDOUT terraform:  # 
openstack_compute_instance_v2.node_server[5] will be created 2025-06-03 14:49:22.548951 | orchestrator | 14:49:22.548 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-03 14:49:22.548991 | orchestrator | 14:49:22.548 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-03 14:49:22.549025 | orchestrator | 14:49:22.548 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-03 14:49:22.549059 | orchestrator | 14:49:22.549 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-03 14:49:22.549099 | orchestrator | 14:49:22.549 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:49:22.549122 | orchestrator | 14:49:22.549 STDOUT terraform:  + availability_zone = "nova" 2025-06-03 14:49:22.549158 | orchestrator | 14:49:22.549 STDOUT terraform:  + config_drive = true 2025-06-03 14:49:22.549200 | orchestrator | 14:49:22.549 STDOUT terraform:  + created = (known after apply) 2025-06-03 14:49:22.549264 | orchestrator | 14:49:22.549 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-03 14:49:22.549295 | orchestrator | 14:49:22.549 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-03 14:49:22.549320 | orchestrator | 14:49:22.549 STDOUT terraform:  + force_delete = false 2025-06-03 14:49:22.549354 | orchestrator | 14:49:22.549 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-03 14:49:22.549390 | orchestrator | 14:49:22.549 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.549436 | orchestrator | 14:49:22.549 STDOUT terraform:  + image_id = (known after apply) 2025-06-03 14:49:22.549473 | orchestrator | 14:49:22.549 STDOUT terraform:  + image_name = (known after apply) 2025-06-03 14:49:22.549497 | orchestrator | 14:49:22.549 STDOUT terraform:  + key_pair = "testbed" 2025-06-03 14:49:22.549527 | orchestrator | 14:49:22.549 STDOUT terraform:  + name = "testbed-node-5" 2025-06-03 14:49:22.549552 | orchestrator | 14:49:22.549 STDOUT terraform:  + 
power_state = "active" 2025-06-03 14:49:22.549586 | orchestrator | 14:49:22.549 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.549623 | orchestrator | 14:49:22.549 STDOUT terraform:  + security_groups = (known after apply) 2025-06-03 14:49:22.549649 | orchestrator | 14:49:22.549 STDOUT terraform:  + stop_before_destroy = false 2025-06-03 14:49:22.549684 | orchestrator | 14:49:22.549 STDOUT terraform:  + updated = (known after apply) 2025-06-03 14:49:22.549733 | orchestrator | 14:49:22.549 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-03 14:49:22.549750 | orchestrator | 14:49:22.549 STDOUT terraform:  + block_device { 2025-06-03 14:49:22.549773 | orchestrator | 14:49:22.549 STDOUT terraform:  + boot_index = 0 2025-06-03 14:49:22.549808 | orchestrator | 14:49:22.549 STDOUT terraform:  + delete_on_termination = false 2025-06-03 14:49:22.549837 | orchestrator | 14:49:22.549 STDOUT terraform:  + destination_type = "volume" 2025-06-03 14:49:22.549868 | orchestrator | 14:49:22.549 STDOUT terraform:  + multiattach = false 2025-06-03 14:49:22.549900 | orchestrator | 14:49:22.549 STDOUT terraform:  + source_type = "volume" 2025-06-03 14:49:22.549938 | orchestrator | 14:49:22.549 STDOUT terraform:  + uuid = (known after apply) 2025-06-03 14:49:22.549944 | orchestrator | 14:49:22.549 STDOUT terraform:  } 2025-06-03 14:49:22.549960 | orchestrator | 14:49:22.549 STDOUT terraform:  + network { 2025-06-03 14:49:22.549983 | orchestrator | 14:49:22.549 STDOUT terraform:  + access_network = false 2025-06-03 14:49:22.550045 | orchestrator | 14:49:22.549 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-03 14:49:22.550085 | orchestrator | 14:49:22.550 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-03 14:49:22.550128 | orchestrator | 14:49:22.550 STDOUT terraform:  + mac = (known after apply) 2025-06-03 14:49:22.550159 | orchestrator | 14:49:22.550 STDOUT terraform:  + name = (known after apply) 
2025-06-03 14:49:22.550190 | orchestrator | 14:49:22.550 STDOUT terraform:  + port = (known after apply) 2025-06-03 14:49:22.550235 | orchestrator | 14:49:22.550 STDOUT terraform:  + uuid = (known after apply) 2025-06-03 14:49:22.550246 | orchestrator | 14:49:22.550 STDOUT terraform:  } 2025-06-03 14:49:22.550251 | orchestrator | 14:49:22.550 STDOUT terraform:  } 2025-06-03 14:49:22.550285 | orchestrator | 14:49:22.550 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-06-03 14:49:22.550319 | orchestrator | 14:49:22.550 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-06-03 14:49:22.550367 | orchestrator | 14:49:22.550 STDOUT terraform:  + fingerprint = (known after apply) 2025-06-03 14:49:22.550396 | orchestrator | 14:49:22.550 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.550419 | orchestrator | 14:49:22.550 STDOUT terraform:  + name = "testbed" 2025-06-03 14:49:22.550448 | orchestrator | 14:49:22.550 STDOUT terraform:  + private_key = (sensitive value) 2025-06-03 14:49:22.550477 | orchestrator | 14:49:22.550 STDOUT terraform:  + public_key = (known after apply) 2025-06-03 14:49:22.550508 | orchestrator | 14:49:22.550 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.550538 | orchestrator | 14:49:22.550 STDOUT terraform:  + user_id = (known after apply) 2025-06-03 14:49:22.550544 | orchestrator | 14:49:22.550 STDOUT terraform:  } 2025-06-03 14:49:22.550596 | orchestrator | 14:49:22.550 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-06-03 14:49:22.550649 | orchestrator | 14:49:22.550 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-03 14:49:22.550677 | orchestrator | 14:49:22.550 STDOUT terraform:  + device = (known after apply) 2025-06-03 14:49:22.550706 | orchestrator | 14:49:22.550 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.550736 | 
orchestrator | 14:49:22.550 STDOUT terraform:  + instance_id = (known after apply) 2025-06-03 14:49:22.550770 | orchestrator | 14:49:22.550 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.550798 | orchestrator | 14:49:22.550 STDOUT terraform:  + volume_id = (known after apply) 2025-06-03 14:49:22.550808 | orchestrator | 14:49:22.550 STDOUT terraform:  } 2025-06-03 14:49:22.550858 | orchestrator | 14:49:22.550 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-06-03 14:49:22.550909 | orchestrator | 14:49:22.550 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-03 14:49:22.550934 | orchestrator | 14:49:22.550 STDOUT terraform:  + device = (known after apply) 2025-06-03 14:49:22.550972 | orchestrator | 14:49:22.550 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.551004 | orchestrator | 14:49:22.550 STDOUT terraform:  + instance_id = (known after apply) 2025-06-03 14:49:22.551037 | orchestrator | 14:49:22.551 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.554894 | orchestrator | 14:49:22.551 STDOUT terraform:  + volume_id = (known after apply) 2025-06-03 14:49:22.554933 | orchestrator | 14:49:22.551 STDOUT terraform:  } 2025-06-03 14:49:22.555008 | orchestrator | 14:49:22.551 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-06-03 14:49:22.555017 | orchestrator | 14:49:22.554 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-03 14:49:22.555093 | orchestrator | 14:49:22.555 STDOUT terraform:  + device = (known after apply) 2025-06-03 14:49:22.555100 | orchestrator | 14:49:22.555 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.555106 | orchestrator | 14:49:22.555 STDOUT terraform:  + instance_id = (known after apply) 2025-06-03 14:49:22.555138 | orchestrator | 14:49:22.555 STDOUT 
terraform:  + region = (known after apply) 2025-06-03 14:49:22.556177 | orchestrator | 14:49:22.555 STDOUT terraform:  + volume_id = (known after apply) 2025-06-03 14:49:22.556232 | orchestrator | 14:49:22.555 STDOUT terraform:  } 2025-06-03 14:49:22.556238 | orchestrator | 14:49:22.555 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-06-03 14:49:22.556243 | orchestrator | 14:49:22.555 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-03 14:49:22.556247 | orchestrator | 14:49:22.555 STDOUT terraform:  + device = (known after apply) 2025-06-03 14:49:22.556251 | orchestrator | 14:49:22.555 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.556255 | orchestrator | 14:49:22.555 STDOUT terraform:  + instance_id = (known after apply) 2025-06-03 14:49:22.556259 | orchestrator | 14:49:22.555 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.556263 | orchestrator | 14:49:22.555 STDOUT terraform:  + volume_id = (known after apply) 2025-06-03 14:49:22.556267 | orchestrator | 14:49:22.555 STDOUT terraform:  } 2025-06-03 14:49:22.556274 | orchestrator | 14:49:22.555 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-06-03 14:49:22.556278 | orchestrator | 14:49:22.555 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-03 14:49:22.556281 | orchestrator | 14:49:22.555 STDOUT terraform:  + device = (known after apply) 2025-06-03 14:49:22.556286 | orchestrator | 14:49:22.555 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.556299 | orchestrator | 14:49:22.555 STDOUT terraform:  + instance_id = (known after apply) 2025-06-03 14:49:22.556303 | orchestrator | 14:49:22.555 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.556311 | orchestrator | 14:49:22.555 STDOUT terraform:  + volume_id = (known after apply) 
2025-06-03 14:49:22.556315 | orchestrator | 14:49:22.555 STDOUT terraform:  } 2025-06-03 14:49:22.556319 | orchestrator | 14:49:22.555 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-06-03 14:49:22.556323 | orchestrator | 14:49:22.555 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-03 14:49:22.556327 | orchestrator | 14:49:22.555 STDOUT terraform:  + device = (known after apply) 2025-06-03 14:49:22.556330 | orchestrator | 14:49:22.555 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.556334 | orchestrator | 14:49:22.555 STDOUT terraform:  + instance_id = (known after apply) 2025-06-03 14:49:22.556338 | orchestrator | 14:49:22.555 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.556342 | orchestrator | 14:49:22.555 STDOUT terraform:  + volume_id = (known after apply) 2025-06-03 14:49:22.556346 | orchestrator | 14:49:22.555 STDOUT terraform:  } 2025-06-03 14:49:22.556349 | orchestrator | 14:49:22.555 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-06-03 14:49:22.556353 | orchestrator | 14:49:22.556 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-03 14:49:22.556357 | orchestrator | 14:49:22.556 STDOUT terraform:  + device = (known after apply) 2025-06-03 14:49:22.556361 | orchestrator | 14:49:22.556 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.556364 | orchestrator | 14:49:22.556 STDOUT terraform:  + instance_id = (known after apply) 2025-06-03 14:49:22.556368 | orchestrator | 14:49:22.556 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.556377 | orchestrator | 14:49:22.556 STDOUT terraform:  + volume_id = (known after apply) 2025-06-03 14:49:22.556381 | orchestrator | 14:49:22.556 STDOUT terraform:  } 2025-06-03 14:49:22.556385 | orchestrator | 14:49:22.556 STDOUT terraform:  
# openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-06-03 14:49:22.556389 | orchestrator | 14:49:22.556 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-03 14:49:22.556393 | orchestrator | 14:49:22.556 STDOUT terraform:  + device = (known after apply) 2025-06-03 14:49:22.556396 | orchestrator | 14:49:22.556 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.556400 | orchestrator | 14:49:22.556 STDOUT terraform:  + instance_id = (known after apply) 2025-06-03 14:49:22.556406 | orchestrator | 14:49:22.556 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.556985 | orchestrator | 14:49:22.556 STDOUT terraform:  + volume_id = (known after apply) 2025-06-03 14:49:22.556991 | orchestrator | 14:49:22.556 STDOUT terraform:  } 2025-06-03 14:49:22.556995 | orchestrator | 14:49:22.556 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-06-03 14:49:22.557073 | orchestrator | 14:49:22.556 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-03 14:49:22.557077 | orchestrator | 14:49:22.556 STDOUT terraform:  + device = (known after apply) 2025-06-03 14:49:22.557081 | orchestrator | 14:49:22.556 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.557085 | orchestrator | 14:49:22.556 STDOUT terraform:  + instance_id = (known after apply) 2025-06-03 14:49:22.557089 | orchestrator | 14:49:22.556 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.557092 | orchestrator | 14:49:22.556 STDOUT terraform:  + volume_id = (known after apply) 2025-06-03 14:49:22.557096 | orchestrator | 14:49:22.556 STDOUT terraform:  } 2025-06-03 14:49:22.557100 | orchestrator | 14:49:22.556 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-06-03 14:49:22.557105 | orchestrator | 14:49:22.556 
STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-06-03 14:49:22.557109 | orchestrator | 14:49:22.556 STDOUT terraform:  + fixed_ip = (known after apply) 2025-06-03 14:49:22.557113 | orchestrator | 14:49:22.556 STDOUT terraform:  + floating_ip = (known after apply) 2025-06-03 14:49:22.557117 | orchestrator | 14:49:22.556 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.557121 | orchestrator | 14:49:22.556 STDOUT terraform:  + port_id = (known after apply) 2025-06-03 14:49:22.557125 | orchestrator | 14:49:22.556 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.557128 | orchestrator | 14:49:22.556 STDOUT terraform:  } 2025-06-03 14:49:22.557134 | orchestrator | 14:49:22.556 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-06-03 14:49:22.557139 | orchestrator | 14:49:22.557 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-06-03 14:49:22.557143 | orchestrator | 14:49:22.557 STDOUT terraform:  + address = (known after apply) 2025-06-03 14:49:22.557147 | orchestrator | 14:49:22.557 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:49:22.557151 | orchestrator | 14:49:22.557 STDOUT terraform:  + dns_domain = (known after apply) 2025-06-03 14:49:22.557156 | orchestrator | 14:49:22.557 STDOUT terraform:  + dns_name = (known after apply) 2025-06-03 14:49:22.557183 | orchestrator | 14:49:22.557 STDOUT terraform:  + fixed_ip = (known after apply) 2025-06-03 14:49:22.557190 | orchestrator | 14:49:22.557 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.557253 | orchestrator | 14:49:22.557 STDOUT terraform:  + pool = "public" 2025-06-03 14:49:22.557260 | orchestrator | 14:49:22.557 STDOUT terraform:  + port_id = (known after apply) 2025-06-03 14:49:22.557279 | orchestrator | 14:49:22.557 STDOUT terraform:  + region = (known after apply) 2025-06-03 
14:49:22.557304 | orchestrator | 14:49:22.557 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-03 14:49:22.557329 | orchestrator | 14:49:22.557 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:49:22.557334 | orchestrator | 14:49:22.557 STDOUT terraform:  } 2025-06-03 14:49:22.557389 | orchestrator | 14:49:22.557 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-06-03 14:49:22.557442 | orchestrator | 14:49:22.557 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-06-03 14:49:22.557479 | orchestrator | 14:49:22.557 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-03 14:49:22.557516 | orchestrator | 14:49:22.557 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:49:22.557535 | orchestrator | 14:49:22.557 STDOUT terraform:  + availability_zone_hints = [ 2025-06-03 14:49:22.557541 | orchestrator | 14:49:22.557 STDOUT terraform:  + "nova", 2025-06-03 14:49:22.557565 | orchestrator | 14:49:22.557 STDOUT terraform:  ] 2025-06-03 14:49:22.557601 | orchestrator | 14:49:22.557 STDOUT terraform:  + dns_domain = (known after apply) 2025-06-03 14:49:22.557640 | orchestrator | 14:49:22.557 STDOUT terraform:  + external = (known after apply) 2025-06-03 14:49:22.557676 | orchestrator | 14:49:22.557 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.557713 | orchestrator | 14:49:22.557 STDOUT terraform:  + mtu = (known after apply) 2025-06-03 14:49:22.557755 | orchestrator | 14:49:22.557 STDOUT terraform:  + name = "net-testbed-management" 2025-06-03 14:49:22.557791 | orchestrator | 14:49:22.557 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-03 14:49:22.557829 | orchestrator | 14:49:22.557 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-03 14:49:22.557866 | orchestrator | 14:49:22.557 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.557904 | orchestrator | 14:49:22.557 
STDOUT terraform:  + shared = (known after apply) 2025-06-03 14:49:22.557950 | orchestrator | 14:49:22.557 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:49:22.557985 | orchestrator | 14:49:22.557 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-06-03 14:49:22.558006 | orchestrator | 14:49:22.557 STDOUT terraform:  + segments (known after apply) 2025-06-03 14:49:22.558030 | orchestrator | 14:49:22.557 STDOUT terraform:  } 2025-06-03 14:49:22.558077 | orchestrator | 14:49:22.558 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-06-03 14:49:22.558122 | orchestrator | 14:49:22.558 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-06-03 14:49:22.558158 | orchestrator | 14:49:22.558 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-03 14:49:22.558195 | orchestrator | 14:49:22.558 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-03 14:49:22.558262 | orchestrator | 14:49:22.558 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-03 14:49:22.558299 | orchestrator | 14:49:22.558 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:49:22.558344 | orchestrator | 14:49:22.558 STDOUT terraform:  + device_id = (known after apply) 2025-06-03 14:49:22.558384 | orchestrator | 14:49:22.558 STDOUT terraform:  + device_owner = (known after apply) 2025-06-03 14:49:22.558420 | orchestrator | 14:49:22.558 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-03 14:49:22.558469 | orchestrator | 14:49:22.558 STDOUT terraform:  + dns_name = (known after apply) 2025-06-03 14:49:22.558511 | orchestrator | 14:49:22.558 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.558547 | orchestrator | 14:49:22.558 STDOUT terraform:  + mac_address = (known after apply) 2025-06-03 14:49:22.558583 | orchestrator | 14:49:22.558 STDOUT terraform:  + network_id = (known after apply) 2025-06-03 
14:49:22.558617 | orchestrator | 14:49:22.558 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-03 14:49:22.558656 | orchestrator | 14:49:22.558 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-03 14:49:22.558692 | orchestrator | 14:49:22.558 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.558731 | orchestrator | 14:49:22.558 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-03 14:49:22.558776 | orchestrator | 14:49:22.558 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:49:22.558800 | orchestrator | 14:49:22.558 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.558832 | orchestrator | 14:49:22.558 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-03 14:49:22.558838 | orchestrator | 14:49:22.558 STDOUT terraform:  } 2025-06-03 14:49:22.558861 | orchestrator | 14:49:22.558 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.558889 | orchestrator | 14:49:22.558 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-03 14:49:22.558895 | orchestrator | 14:49:22.558 STDOUT terraform:  } 2025-06-03 14:49:22.558924 | orchestrator | 14:49:22.558 STDOUT terraform:  + binding (known after apply) 2025-06-03 14:49:22.558930 | orchestrator | 14:49:22.558 STDOUT terraform:  + fixed_ip { 2025-06-03 14:49:22.558958 | orchestrator | 14:49:22.558 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-06-03 14:49:22.558990 | orchestrator | 14:49:22.558 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-03 14:49:22.559009 | orchestrator | 14:49:22.558 STDOUT terraform:  } 2025-06-03 14:49:22.559013 | orchestrator | 14:49:22.558 STDOUT terraform:  } 2025-06-03 14:49:22.559062 | orchestrator | 14:49:22.559 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-06-03 14:49:22.559106 | orchestrator | 14:49:22.559 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 
2025-06-03 14:49:22.559142 | orchestrator | 14:49:22.559 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-03 14:49:22.559177 | orchestrator | 14:49:22.559 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-03 14:49:22.559252 | orchestrator | 14:49:22.559 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-03 14:49:22.559257 | orchestrator | 14:49:22.559 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:49:22.559373 | orchestrator | 14:49:22.559 STDOUT terraform:  + device_id = (known after apply) 2025-06-03 14:49:22.559411 | orchestrator | 14:49:22.559 STDOUT terraform:  + device_owner = (known after apply) 2025-06-03 14:49:22.559422 | orchestrator | 14:49:22.559 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-03 14:49:22.559437 | orchestrator | 14:49:22.559 STDOUT terraform:  + dns_name = (known after apply) 2025-06-03 14:49:22.559443 | orchestrator | 14:49:22.559 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.559481 | orchestrator | 14:49:22.559 STDOUT terraform:  + mac_address = (known after apply) 2025-06-03 14:49:22.559516 | orchestrator | 14:49:22.559 STDOUT terraform:  + network_id = (known after apply) 2025-06-03 14:49:22.559555 | orchestrator | 14:49:22.559 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-03 14:49:22.559590 | orchestrator | 14:49:22.559 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-03 14:49:22.559630 | orchestrator | 14:49:22.559 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.559669 | orchestrator | 14:49:22.559 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-03 14:49:22.559705 | orchestrator | 14:49:22.559 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:49:22.559715 | orchestrator | 14:49:22.559 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.559757 | orchestrator | 14:49:22.559 STDOUT terraform:  + ip_address = 
"192.168.112.0/20" 2025-06-03 14:49:22.559766 | orchestrator | 14:49:22.559 STDOUT terraform:  } 2025-06-03 14:49:22.559775 | orchestrator | 14:49:22.559 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.559811 | orchestrator | 14:49:22.559 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-03 14:49:22.559818 | orchestrator | 14:49:22.559 STDOUT terraform:  } 2025-06-03 14:49:22.559827 | orchestrator | 14:49:22.559 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.559863 | orchestrator | 14:49:22.559 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-03 14:49:22.559871 | orchestrator | 14:49:22.559 STDOUT terraform:  } 2025-06-03 14:49:22.559879 | orchestrator | 14:49:22.559 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.559925 | orchestrator | 14:49:22.559 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-03 14:49:22.559935 | orchestrator | 14:49:22.559 STDOUT terraform:  } 2025-06-03 14:49:22.559962 | orchestrator | 14:49:22.559 STDOUT terraform:  + binding (known after apply) 2025-06-03 14:49:22.559969 | orchestrator | 14:49:22.559 STDOUT terraform:  + fixed_ip { 2025-06-03 14:49:22.559978 | orchestrator | 14:49:22.559 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-06-03 14:49:22.560021 | orchestrator | 14:49:22.559 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-03 14:49:22.560029 | orchestrator | 14:49:22.560 STDOUT terraform:  } 2025-06-03 14:49:22.560038 | orchestrator | 14:49:22.560 STDOUT terraform:  } 2025-06-03 14:49:22.560101 | orchestrator | 14:49:22.560 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-06-03 14:49:22.560145 | orchestrator | 14:49:22.560 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-03 14:49:22.560181 | orchestrator | 14:49:22.560 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-03 14:49:22.560247 | orchestrator | 14:49:22.560 
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-03 14:49:22.560284 | orchestrator | 14:49:22.560 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-03 14:49:22.560330 | orchestrator | 14:49:22.560 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:49:22.560360 | orchestrator | 14:49:22.560 STDOUT terraform:  + device_id = (known after apply) 2025-06-03 14:49:22.560397 | orchestrator | 14:49:22.560 STDOUT terraform:  + device_owner = (known after apply) 2025-06-03 14:49:22.560438 | orchestrator | 14:49:22.560 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-03 14:49:22.560474 | orchestrator | 14:49:22.560 STDOUT terraform:  + dns_name = (known after apply) 2025-06-03 14:49:22.560511 | orchestrator | 14:49:22.560 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.560573 | orchestrator | 14:49:22.560 STDOUT terraform:  + mac_address = (known after apply) 2025-06-03 14:49:22.560582 | orchestrator | 14:49:22.560 STDOUT terraform:  + network_id = (known after apply) 2025-06-03 14:49:22.560620 | orchestrator | 14:49:22.560 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-03 14:49:22.560661 | orchestrator | 14:49:22.560 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-03 14:49:22.560699 | orchestrator | 14:49:22.560 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.560735 | orchestrator | 14:49:22.560 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-03 14:49:22.560770 | orchestrator | 14:49:22.560 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:49:22.560780 | orchestrator | 14:49:22.560 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.560814 | orchestrator | 14:49:22.560 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-03 14:49:22.560822 | orchestrator | 14:49:22.560 STDOUT terraform:  } 2025-06-03 14:49:22.560830 | orchestrator | 14:49:22.560 STDOUT terraform:  
+ allowed_address_pairs { 2025-06-03 14:49:22.560868 | orchestrator | 14:49:22.560 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-03 14:49:22.560876 | orchestrator | 14:49:22.560 STDOUT terraform:  } 2025-06-03 14:49:22.560885 | orchestrator | 14:49:22.560 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.560929 | orchestrator | 14:49:22.560 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-03 14:49:22.560937 | orchestrator | 14:49:22.560 STDOUT terraform:  } 2025-06-03 14:49:22.560971 | orchestrator | 14:49:22.560 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.560981 | orchestrator | 14:49:22.560 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-03 14:49:22.561018 | orchestrator | 14:49:22.560 STDOUT terraform:  } 2025-06-03 14:49:22.561028 | orchestrator | 14:49:22.561 STDOUT terraform:  + binding (known after apply) 2025-06-03 14:49:22.561055 | orchestrator | 14:49:22.561 STDOUT terraform:  + fixed_ip { 2025-06-03 14:49:22.561065 | orchestrator | 14:49:22.561 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-06-03 14:49:22.561112 | orchestrator | 14:49:22.561 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-03 14:49:22.561120 | orchestrator | 14:49:22.561 STDOUT terraform:  } 2025-06-03 14:49:22.561134 | orchestrator | 14:49:22.561 STDOUT terraform:  } 2025-06-03 14:49:22.561180 | orchestrator | 14:49:22.561 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-06-03 14:49:22.561238 | orchestrator | 14:49:22.561 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-03 14:49:22.561274 | orchestrator | 14:49:22.561 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-03 14:49:22.561309 | orchestrator | 14:49:22.561 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-03 14:49:22.561344 | orchestrator | 14:49:22.561 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-06-03 14:49:22.561385 | orchestrator | 14:49:22.561 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:49:22.561420 | orchestrator | 14:49:22.561 STDOUT terraform:  + device_id = (known after apply) 2025-06-03 14:49:22.561456 | orchestrator | 14:49:22.561 STDOUT terraform:  + device_owner = (known after apply) 2025-06-03 14:49:22.561495 | orchestrator | 14:49:22.561 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-03 14:49:22.561533 | orchestrator | 14:49:22.561 STDOUT terraform:  + dns_name = (known after apply) 2025-06-03 14:49:22.561572 | orchestrator | 14:49:22.561 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.561601 | orchestrator | 14:49:22.561 STDOUT terraform:  + mac_address = (known after apply) 2025-06-03 14:49:22.561640 | orchestrator | 14:49:22.561 STDOUT terraform:  + network_id = (known after apply) 2025-06-03 14:49:22.561680 | orchestrator | 14:49:22.561 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-03 14:49:22.561763 | orchestrator | 14:49:22.561 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-03 14:49:22.561773 | orchestrator | 14:49:22.561 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.561782 | orchestrator | 14:49:22.561 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-03 14:49:22.561821 | orchestrator | 14:49:22.561 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:49:22.561831 | orchestrator | 14:49:22.561 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.561874 | orchestrator | 14:49:22.561 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-03 14:49:22.561884 | orchestrator | 14:49:22.561 STDOUT terraform:  } 2025-06-03 14:49:22.561893 | orchestrator | 14:49:22.561 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.561933 | orchestrator | 14:49:22.561 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-03 14:49:22.561940 | 
orchestrator | 14:49:22.561 STDOUT terraform:  } 2025-06-03 14:49:22.561949 | orchestrator | 14:49:22.561 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.561985 | orchestrator | 14:49:22.561 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-03 14:49:22.561996 | orchestrator | 14:49:22.561 STDOUT terraform:  } 2025-06-03 14:49:22.562004 | orchestrator | 14:49:22.561 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.562060 | orchestrator | 14:49:22.562 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-03 14:49:22.562075 | orchestrator | 14:49:22.562 STDOUT terraform:  } 2025-06-03 14:49:22.562084 | orchestrator | 14:49:22.562 STDOUT terraform:  + binding (known after apply) 2025-06-03 14:49:22.562119 | orchestrator | 14:49:22.562 STDOUT terraform:  + fixed_ip { 2025-06-03 14:49:22.562129 | orchestrator | 14:49:22.562 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-06-03 14:49:22.562164 | orchestrator | 14:49:22.562 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-03 14:49:22.562171 | orchestrator | 14:49:22.562 STDOUT terraform:  } 2025-06-03 14:49:22.562181 | orchestrator | 14:49:22.562 STDOUT terraform:  } 2025-06-03 14:49:22.562398 | orchestrator | 14:49:22.562 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-06-03 14:49:22.562475 | orchestrator | 14:49:22.562 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-03 14:49:22.562490 | orchestrator | 14:49:22.562 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-03 14:49:22.562502 | orchestrator | 14:49:22.562 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-03 14:49:22.562524 | orchestrator | 14:49:22.562 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-03 14:49:22.562535 | orchestrator | 14:49:22.562 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:49:22.562546 | orchestrator | 
14:49:22.562 STDOUT terraform:  + device_id = (known after apply) 2025-06-03 14:49:22.562556 | orchestrator | 14:49:22.562 STDOUT terraform:  + device_owner = (known after apply) 2025-06-03 14:49:22.562567 | orchestrator | 14:49:22.562 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-03 14:49:22.562582 | orchestrator | 14:49:22.562 STDOUT terraform:  + dns_name = (known after apply) 2025-06-03 14:49:22.562593 | orchestrator | 14:49:22.562 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.562669 | orchestrator | 14:49:22.562 STDOUT terraform:  + mac_address = (known after apply) 2025-06-03 14:49:22.562684 | orchestrator | 14:49:22.562 STDOUT terraform:  + network_id = (known after apply) 2025-06-03 14:49:22.562699 | orchestrator | 14:49:22.562 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-03 14:49:22.562745 | orchestrator | 14:49:22.562 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-03 14:49:22.562771 | orchestrator | 14:49:22.562 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.562794 | orchestrator | 14:49:22.562 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-03 14:49:22.562863 | orchestrator | 14:49:22.562 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:49:22.562885 | orchestrator | 14:49:22.562 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.562902 | orchestrator | 14:49:22.562 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-03 14:49:22.562913 | orchestrator | 14:49:22.562 STDOUT terraform:  } 2025-06-03 14:49:22.562925 | orchestrator | 14:49:22.562 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.562939 | orchestrator | 14:49:22.562 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-03 14:49:22.562971 | orchestrator | 14:49:22.562 STDOUT terraform:  } 2025-06-03 14:49:22.562983 | orchestrator | 14:49:22.562 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 
14:49:22.562997 | orchestrator | 14:49:22.562 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-03 14:49:22.563008 | orchestrator | 14:49:22.562 STDOUT terraform:  } 2025-06-03 14:49:22.563018 | orchestrator | 14:49:22.562 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.563032 | orchestrator | 14:49:22.562 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-03 14:49:22.563043 | orchestrator | 14:49:22.563 STDOUT terraform:  } 2025-06-03 14:49:22.563058 | orchestrator | 14:49:22.563 STDOUT terraform:  + binding (known after apply) 2025-06-03 14:49:22.563069 | orchestrator | 14:49:22.563 STDOUT terraform:  + fixed_ip { 2025-06-03 14:49:22.563083 | orchestrator | 14:49:22.563 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-06-03 14:49:22.563134 | orchestrator | 14:49:22.563 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-03 14:49:22.563147 | orchestrator | 14:49:22.563 STDOUT terraform:  } 2025-06-03 14:49:22.563159 | orchestrator | 14:49:22.563 STDOUT terraform:  } 2025-06-03 14:49:22.563174 | orchestrator | 14:49:22.563 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-06-03 14:49:22.563276 | orchestrator | 14:49:22.563 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-03 14:49:22.563295 | orchestrator | 14:49:22.563 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-03 14:49:22.563364 | orchestrator | 14:49:22.563 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-03 14:49:22.563377 | orchestrator | 14:49:22.563 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-03 14:49:22.563406 | orchestrator | 14:49:22.563 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:49:22.563421 | orchestrator | 14:49:22.563 STDOUT terraform:  + device_id = (known after apply) 2025-06-03 14:49:22.563471 | orchestrator | 14:49:22.563 STDOUT terraform:  + device_owner = (known after 
apply) 2025-06-03 14:49:22.563488 | orchestrator | 14:49:22.563 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-03 14:49:22.563545 | orchestrator | 14:49:22.563 STDOUT terraform:  + dns_name = (known after apply) 2025-06-03 14:49:22.563562 | orchestrator | 14:49:22.563 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.563623 | orchestrator | 14:49:22.563 STDOUT terraform:  + mac_address = (known after apply) 2025-06-03 14:49:22.563640 | orchestrator | 14:49:22.563 STDOUT terraform:  + network_id = (known after apply) 2025-06-03 14:49:22.569837 | orchestrator | 14:49:22.563 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-03 14:49:22.569913 | orchestrator | 14:49:22.569 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-03 14:49:22.569925 | orchestrator | 14:49:22.569 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.569951 | orchestrator | 14:49:22.569 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-03 14:49:22.569976 | orchestrator | 14:49:22.569 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:49:22.569994 | orchestrator | 14:49:22.569 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.570041 | orchestrator | 14:49:22.569 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-03 14:49:22.570057 | orchestrator | 14:49:22.569 STDOUT terraform:  } 2025-06-03 14:49:22.570074 | orchestrator | 14:49:22.569 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.570088 | orchestrator | 14:49:22.569 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-03 14:49:22.570105 | orchestrator | 14:49:22.569 STDOUT terraform:  } 2025-06-03 14:49:22.570134 | orchestrator | 14:49:22.569 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.570308 | orchestrator | 14:49:22.569 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-03 14:49:22.570326 | orchestrator | 14:49:22.570 STDOUT terraform:  } 
2025-06-03 14:49:22.570340 | orchestrator | 14:49:22.570 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.570389 | orchestrator | 14:49:22.570 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-03 14:49:22.570400 | orchestrator | 14:49:22.570 STDOUT terraform:  } 2025-06-03 14:49:22.570448 | orchestrator | 14:49:22.570 STDOUT terraform:  + binding (known after apply) 2025-06-03 14:49:22.570464 | orchestrator | 14:49:22.570 STDOUT terraform:  + fixed_ip { 2025-06-03 14:49:22.570531 | orchestrator | 14:49:22.570 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-06-03 14:49:22.570547 | orchestrator | 14:49:22.570 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-03 14:49:22.570870 | orchestrator | 14:49:22.570 STDOUT terraform:  } 2025-06-03 14:49:22.570886 | orchestrator | 14:49:22.570 STDOUT terraform:  } 2025-06-03 14:49:22.570894 | orchestrator | 14:49:22.570 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-06-03 14:49:22.570903 | orchestrator | 14:49:22.570 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-03 14:49:22.570911 | orchestrator | 14:49:22.570 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-03 14:49:22.570919 | orchestrator | 14:49:22.570 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-03 14:49:22.570931 | orchestrator | 14:49:22.570 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-03 14:49:22.570979 | orchestrator | 14:49:22.570 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:49:22.571362 | orchestrator | 14:49:22.570 STDOUT terraform:  + device_id = (known after apply) 2025-06-03 14:49:22.571381 | orchestrator | 14:49:22.571 STDOUT terraform:  + device_owner = (known after apply) 2025-06-03 14:49:22.571407 | orchestrator | 14:49:22.571 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-03 14:49:22.571415 | orchestrator | 
14:49:22.571 STDOUT terraform:  + dns_name = (known after apply) 2025-06-03 14:49:22.571423 | orchestrator | 14:49:22.571 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.571435 | orchestrator | 14:49:22.571 STDOUT terraform:  + mac_address = (known after apply) 2025-06-03 14:49:22.571455 | orchestrator | 14:49:22.571 STDOUT terraform:  + network_id = (known after apply) 2025-06-03 14:49:22.571657 | orchestrator | 14:49:22.571 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-03 14:49:22.571670 | orchestrator | 14:49:22.571 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-03 14:49:22.571679 | orchestrator | 14:49:22.571 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.571687 | orchestrator | 14:49:22.571 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-03 14:49:22.571698 | orchestrator | 14:49:22.571 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:49:22.571755 | orchestrator | 14:49:22.571 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.571804 | orchestrator | 14:49:22.571 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-03 14:49:22.571814 | orchestrator | 14:49:22.571 STDOUT terraform:  } 2025-06-03 14:49:22.572763 | orchestrator | 14:49:22.571 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.572814 | orchestrator | 14:49:22.571 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-03 14:49:22.572823 | orchestrator | 14:49:22.571 STDOUT terraform:  } 2025-06-03 14:49:22.572831 | orchestrator | 14:49:22.571 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.572839 | orchestrator | 14:49:22.571 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-03 14:49:22.572847 | orchestrator | 14:49:22.571 STDOUT terraform:  } 2025-06-03 14:49:22.572855 | orchestrator | 14:49:22.571 STDOUT terraform:  + allowed_address_pairs { 2025-06-03 14:49:22.572863 | orchestrator | 14:49:22.572 STDOUT 
terraform:  + ip_address = "192.168.16.9/20" 2025-06-03 14:49:22.572871 | orchestrator | 14:49:22.572 STDOUT terraform:  } 2025-06-03 14:49:22.572879 | orchestrator | 14:49:22.572 STDOUT terraform:  + binding (known after apply) 2025-06-03 14:49:22.572887 | orchestrator | 14:49:22.572 STDOUT terraform:  + fixed_ip { 2025-06-03 14:49:22.572895 | orchestrator | 14:49:22.572 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-06-03 14:49:22.572903 | orchestrator | 14:49:22.572 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-03 14:49:22.572910 | orchestrator | 14:49:22.572 STDOUT terraform:  } 2025-06-03 14:49:22.572918 | orchestrator | 14:49:22.572 STDOUT terraform:  } 2025-06-03 14:49:22.572926 | orchestrator | 14:49:22.572 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-06-03 14:49:22.572935 | orchestrator | 14:49:22.572 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-06-03 14:49:22.572942 | orchestrator | 14:49:22.572 STDOUT terraform:  + force_destroy = false 2025-06-03 14:49:22.572950 | orchestrator | 14:49:22.572 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.572958 | orchestrator | 14:49:22.572 STDOUT terraform:  + port_id = (known after apply) 2025-06-03 14:49:22.572966 | orchestrator | 14:49:22.572 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.572974 | orchestrator | 14:49:22.572 STDOUT terraform:  + router_id = (known after apply) 2025-06-03 14:49:22.572994 | orchestrator | 14:49:22.572 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-03 14:49:22.573002 | orchestrator | 14:49:22.572 STDOUT terraform:  } 2025-06-03 14:49:22.573015 | orchestrator | 14:49:22.572 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-06-03 14:49:22.573023 | orchestrator | 14:49:22.572 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-06-03 14:49:22.573031 
| orchestrator | 14:49:22.572 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-03 14:49:22.573039 | orchestrator | 14:49:22.572 STDOUT terraform:  + all_tags = (known after apply) 2025-06-03 14:49:22.573047 | orchestrator | 14:49:22.572 STDOUT terraform:  + availability_zone_hints = [ 2025-06-03 14:49:22.573054 | orchestrator | 14:49:22.572 STDOUT terraform:  + "nova", 2025-06-03 14:49:22.573062 | orchestrator | 14:49:22.572 STDOUT terraform:  ] 2025-06-03 14:49:22.573076 | orchestrator | 14:49:22.572 STDOUT terraform:  + distributed = (known after apply) 2025-06-03 14:49:22.573087 | orchestrator | 14:49:22.573 STDOUT terraform:  + enable_snat = (known after apply) 2025-06-03 14:49:22.573172 | orchestrator | 14:49:22.573 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-06-03 14:49:22.573621 | orchestrator | 14:49:22.573 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.573640 | orchestrator | 14:49:22.573 STDOUT terraform:  + name = "testbed" 2025-06-03 14:49:22.573649 | orchestrator | 14:49:22.573 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.573657 | orchestrator | 14:49:22.573 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:49:22.573665 | orchestrator | 14:49:22.573 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-06-03 14:49:22.573673 | orchestrator | 14:49:22.573 STDOUT terraform:  } 2025-06-03 14:49:22.573681 | orchestrator | 14:49:22.573 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-06-03 14:49:22.573690 | orchestrator | 14:49:22.573 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-06-03 14:49:22.573702 | orchestrator | 14:49:22.573 STDOUT terraform:  + description = "ssh" 2025-06-03 14:49:22.573710 | orchestrator | 14:49:22.573 STDOUT terraform:  + direction = "ingress" 2025-06-03 14:49:22.573721 | 
orchestrator | 14:49:22.573 STDOUT terraform:  + ethertype = "IPv4" 2025-06-03 14:49:22.574068 | orchestrator | 14:49:22.573 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.574086 | orchestrator | 14:49:22.573 STDOUT terraform:  + port_range_max = 22 2025-06-03 14:49:22.574094 | orchestrator | 14:49:22.573 STDOUT terraform:  + port_range_min = 22 2025-06-03 14:49:22.574102 | orchestrator | 14:49:22.573 STDOUT terraform:  + protocol = "tcp" 2025-06-03 14:49:22.574110 | orchestrator | 14:49:22.573 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.574118 | orchestrator | 14:49:22.573 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-03 14:49:22.574126 | orchestrator | 14:49:22.573 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-03 14:49:22.574145 | orchestrator | 14:49:22.573 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-03 14:49:22.574157 | orchestrator | 14:49:22.574 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:49:22.574165 | orchestrator | 14:49:22.574 STDOUT terraform:  } 2025-06-03 14:49:22.574315 | orchestrator | 14:49:22.574 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-06-03 14:49:22.574366 | orchestrator | 14:49:22.574 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-06-03 14:49:22.574375 | orchestrator | 14:49:22.574 STDOUT terraform:  + description = "wireguard" 2025-06-03 14:49:22.574387 | orchestrator | 14:49:22.574 STDOUT terraform:  + direction = "ingress" 2025-06-03 14:49:22.574880 | orchestrator | 14:49:22.574 STDOUT terraform:  + ethertype = "IPv4" 2025-06-03 14:49:22.574893 | orchestrator | 14:49:22.574 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.574900 | orchestrator | 14:49:22.574 STDOUT terraform:  + port_range_max = 51820 2025-06-03 14:49:22.574907 | orchestrator | 14:49:22.574 STDOUT 
terraform:  + port_range_min = 51820 2025-06-03 14:49:22.574913 | orchestrator | 14:49:22.574 STDOUT terraform:  + protocol = "udp" 2025-06-03 14:49:22.574938 | orchestrator | 14:49:22.574 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.574946 | orchestrator | 14:49:22.574 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-03 14:49:22.574972 | orchestrator | 14:49:22.574 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-03 14:49:22.575142 | orchestrator | 14:49:22.574 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-03 14:49:22.575153 | orchestrator | 14:49:22.575 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:49:22.575169 | orchestrator | 14:49:22.575 STDOUT terraform:  } 2025-06-03 14:49:22.575189 | orchestrator | 14:49:22.575 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-03 14:49:22.575945 | orchestrator | 14:49:22.575 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-03 14:49:22.575969 | orchestrator | 14:49:22.575 STDOUT terraform:  + direction = "ingress" 2025-06-03 14:49:22.575976 | orchestrator | 14:49:22.575 STDOUT terraform:  + ethertype = "IPv4" 2025-06-03 14:49:22.575993 | orchestrator | 14:49:22.575 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.576000 | orchestrator | 14:49:22.575 STDOUT terraform:  + protocol = "tcp" 2025-06-03 14:49:22.576007 | orchestrator | 14:49:22.575 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.576013 | orchestrator | 14:49:22.575 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-03 14:49:22.576020 | orchestrator | 14:49:22.575 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-03 14:49:22.576027 | orchestrator | 14:49:22.575 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-03 14:49:22.576034 | orchestrator | 
14:49:22.575 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:49:22.576049 | orchestrator | 14:49:22.575 STDOUT terraform:  } 2025-06-03 14:49:22.576056 | orchestrator | 14:49:22.575 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-03 14:49:22.576063 | orchestrator | 14:49:22.575 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-03 14:49:22.576070 | orchestrator | 14:49:22.575 STDOUT terraform:  + direction = "ingress" 2025-06-03 14:49:22.576081 | orchestrator | 14:49:22.575 STDOUT terraform:  + ethertype = "IPv4" 2025-06-03 14:49:22.576087 | orchestrator | 14:49:22.575 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.576094 | orchestrator | 14:49:22.575 STDOUT terraform:  + protocol = "udp" 2025-06-03 14:49:22.576101 | orchestrator | 14:49:22.576 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.576107 | orchestrator | 14:49:22.576 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-03 14:49:22.576116 | orchestrator | 14:49:22.576 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-03 14:49:22.576163 | orchestrator | 14:49:22.576 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-03 14:49:22.576285 | orchestrator | 14:49:22.576 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:49:22.576297 | orchestrator | 14:49:22.576 STDOUT terraform:  } 2025-06-03 14:49:22.576384 | orchestrator | 14:49:22.576 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-03 14:49:22.576393 | orchestrator | 14:49:22.576 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-03 14:49:22.576402 | orchestrator | 14:49:22.576 STDOUT terraform:  + direction = "ingress" 2025-06-03 14:49:22.577619 | orchestrator | 14:49:22.576 
STDOUT terraform:  + ethertype = "IPv4" 2025-06-03 14:49:22.577646 | orchestrator | 14:49:22.576 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.577653 | orchestrator | 14:49:22.576 STDOUT terraform:  + protocol = "icmp" 2025-06-03 14:49:22.577660 | orchestrator | 14:49:22.576 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.577667 | orchestrator | 14:49:22.576 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-03 14:49:22.577673 | orchestrator | 14:49:22.576 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-03 14:49:22.577679 | orchestrator | 14:49:22.576 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-03 14:49:22.577686 | orchestrator | 14:49:22.576 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:49:22.577693 | orchestrator | 14:49:22.576 STDOUT terraform:  } 2025-06-03 14:49:22.577699 | orchestrator | 14:49:22.576 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-06-03 14:49:22.577707 | orchestrator | 14:49:22.576 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-06-03 14:49:22.577713 | orchestrator | 14:49:22.576 STDOUT terraform:  + direction = "ingress" 2025-06-03 14:49:22.577720 | orchestrator | 14:49:22.576 STDOUT terraform:  + ethertype = "IPv4" 2025-06-03 14:49:22.577740 | orchestrator | 14:49:22.576 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.577753 | orchestrator | 14:49:22.577 STDOUT terraform:  + protocol = "tcp" 2025-06-03 14:49:22.577760 | orchestrator | 14:49:22.577 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.577766 | orchestrator | 14:49:22.577 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-03 14:49:22.577773 | orchestrator | 14:49:22.577 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-03 14:49:22.577779 | orchestrator | 14:49:22.577 STDOUT terraform:  + 
security_group_id = (known after apply) 2025-06-03 14:49:22.577786 | orchestrator | 14:49:22.577 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:49:22.577792 | orchestrator | 14:49:22.577 STDOUT terraform:  } 2025-06-03 14:49:22.577799 | orchestrator | 14:49:22.577 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-06-03 14:49:22.577806 | orchestrator | 14:49:22.577 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-06-03 14:49:22.577812 | orchestrator | 14:49:22.577 STDOUT terraform:  + direction = "ingress" 2025-06-03 14:49:22.577818 | orchestrator | 14:49:22.577 STDOUT terraform:  + ethertype = "IPv4" 2025-06-03 14:49:22.577825 | orchestrator | 14:49:22.577 STDOUT terraform:  + id = (known after apply) 2025-06-03 14:49:22.577832 | orchestrator | 14:49:22.577 STDOUT terraform:  + protocol = "udp" 2025-06-03 14:49:22.577838 | orchestrator | 14:49:22.577 STDOUT terraform:  + region = (known after apply) 2025-06-03 14:49:22.577845 | orchestrator | 14:49:22.577 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-03 14:49:22.577856 | orchestrator | 14:49:22.577 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-03 14:49:22.577863 | orchestrator | 14:49:22.577 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-03 14:49:22.577870 | orchestrator | 14:49:22.577 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-03 14:49:22.577876 | orchestrator | 14:49:22.577 STDOUT terraform:  } 2025-06-03 14:49:22.577883 | orchestrator | 14:49:22.577 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-06-03 14:49:22.577890 | orchestrator | 14:49:22.577 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-06-03 14:49:22.577899 | orchestrator | 14:49:22.577 STDOUT terraform:  + direction = "ingress" 
2025-06-03 14:49:22.577906 | orchestrator | 14:49:22.577 STDOUT terraform:  + ethertype = "IPv4"
2025-06-03 14:49:22.577940 | orchestrator | 14:49:22.577 STDOUT terraform:  + id = (known after apply)
2025-06-03 14:49:22.577969 | orchestrator | 14:49:22.577 STDOUT terraform:  + protocol = "icmp"
2025-06-03 14:49:22.578043 | orchestrator | 14:49:22.577 STDOUT terraform:  + region = (known after apply)
2025-06-03 14:49:22.578081 | orchestrator | 14:49:22.578 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-03 14:49:22.578113 | orchestrator | 14:49:22.578 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-03 14:49:22.578159 | orchestrator | 14:49:22.578 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-03 14:49:22.578199 | orchestrator | 14:49:22.578 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-03 14:49:22.578208 | orchestrator | 14:49:22.578 STDOUT terraform:  }
2025-06-03 14:49:22.578373 | orchestrator | 14:49:22.578 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-06-03 14:49:22.578411 | orchestrator | 14:49:22.578 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-06-03 14:49:22.578424 | orchestrator | 14:49:22.578 STDOUT terraform:  + description = "vrrp"
2025-06-03 14:49:22.578438 | orchestrator | 14:49:22.578 STDOUT terraform:  + direction = "ingress"
2025-06-03 14:49:22.578447 | orchestrator | 14:49:22.578 STDOUT terraform:  + ethertype = "IPv4"
2025-06-03 14:49:22.578491 | orchestrator | 14:49:22.578 STDOUT terraform:  + id = (known after apply)
2025-06-03 14:49:22.578511 | orchestrator | 14:49:22.578 STDOUT terraform:  + protocol = "112"
2025-06-03 14:49:22.578562 | orchestrator | 14:49:22.578 STDOUT terraform:  + region = (known after apply)
2025-06-03 14:49:22.578625 | orchestrator | 14:49:22.578 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-03 14:49:22.578681 | orchestrator | 14:49:22.578 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-03 14:49:22.578730 | orchestrator | 14:49:22.578 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-03 14:49:22.578774 | orchestrator | 14:49:22.578 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-03 14:49:22.578784 | orchestrator | 14:49:22.578 STDOUT terraform:  }
2025-06-03 14:49:22.578882 | orchestrator | 14:49:22.578 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-06-03 14:49:22.578926 | orchestrator | 14:49:22.578 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-06-03 14:49:22.579003 | orchestrator | 14:49:22.578 STDOUT terraform:  + all_tags = (known after apply)
2025-06-03 14:49:22.579131 | orchestrator | 14:49:22.578 STDOUT terraform:  + description = "management security group"
2025-06-03 14:49:22.579148 | orchestrator | 14:49:22.579 STDOUT terraform:  + id = (known after apply)
2025-06-03 14:49:22.579157 | orchestrator | 14:49:22.579 STDOUT terraform:  + name = "testbed-management"
2025-06-03 14:49:22.579247 | orchestrator | 14:49:22.579 STDOUT terraform:  + region = (known after apply)
2025-06-03 14:49:22.579320 | orchestrator | 14:49:22.579 STDOUT terraform:  + stateful = (known after apply)
2025-06-03 14:49:22.579344 | orchestrator | 14:49:22.579 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-03 14:49:22.579357 | orchestrator | 14:49:22.579 STDOUT terraform:  }
2025-06-03 14:49:22.579441 | orchestrator | 14:49:22.579 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-06-03 14:49:22.579506 | orchestrator | 14:49:22.579 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-06-03 14:49:22.579559 | orchestrator | 14:49:22.579 STDOUT terraform:  + all_tags = (known after apply)
2025-06-03 14:49:22.579574 | orchestrator | 14:49:22.579 STDOUT terraform:  + description = "node security group"
2025-06-03 14:49:22.579630 | orchestrator | 14:49:22.579 STDOUT terraform:  + id = (known after apply)
2025-06-03 14:49:22.579645 | orchestrator | 14:49:22.579 STDOUT terraform:  + name = "testbed-node"
2025-06-03 14:49:22.579684 | orchestrator | 14:49:22.579 STDOUT terraform:  + region = (known after apply)
2025-06-03 14:49:22.579733 | orchestrator | 14:49:22.579 STDOUT terraform:  + stateful = (known after apply)
2025-06-03 14:49:22.579748 | orchestrator | 14:49:22.579 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-03 14:49:22.579758 | orchestrator | 14:49:22.579 STDOUT terraform:  }
2025-06-03 14:49:22.579823 | orchestrator | 14:49:22.579 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-06-03 14:49:22.579881 | orchestrator | 14:49:22.579 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-06-03 14:49:22.579896 | orchestrator | 14:49:22.579 STDOUT terraform:  + all_tags = (known after apply)
2025-06-03 14:49:22.579957 | orchestrator | 14:49:22.579 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-06-03 14:49:22.579971 | orchestrator | 14:49:22.579 STDOUT terraform:  + dns_nameservers = [
2025-06-03 14:49:22.579985 | orchestrator | 14:49:22.579 STDOUT terraform:  + "8.8.8.8",
2025-06-03 14:49:22.579997 | orchestrator | 14:49:22.579 STDOUT terraform:  + "9.9.9.9",
2025-06-03 14:49:22.580010 | orchestrator | 14:49:22.579 STDOUT terraform:  ]
2025-06-03 14:49:22.580059 | orchestrator | 14:49:22.580 STDOUT terraform:  + enable_dhcp = true
2025-06-03 14:49:22.580073 | orchestrator | 14:49:22.580 STDOUT terraform:  + gateway_ip = (known after apply)
2025-06-03 14:49:22.580124 | orchestrator | 14:49:22.580 STDOUT terraform:  + id = (known after apply)
2025-06-03 14:49:22.580139 | orchestrator | 14:49:22.580 STDOUT terraform:  + ip_version = 4
2025-06-03 14:49:22.580178 | orchestrator | 14:49:22.580 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-06-03 14:49:22.580241 | orchestrator | 14:49:22.580 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-06-03 14:49:22.580257 | orchestrator | 14:49:22.580 STDOUT terraform:  + name = "subnet-testbed-management"
2025-06-03 14:49:22.580310 | orchestrator | 14:49:22.580 STDOUT terraform:  + network_id = (known after apply)
2025-06-03 14:49:22.580325 | orchestrator | 14:49:22.580 STDOUT terraform:  + no_gateway = false
2025-06-03 14:49:22.580365 | orchestrator | 14:49:22.580 STDOUT terraform:  + region = (known after apply)
2025-06-03 14:49:22.580415 | orchestrator | 14:49:22.580 STDOUT terraform:  + service_types = (known after apply)
2025-06-03 14:49:22.580430 | orchestrator | 14:49:22.580 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-03 14:49:22.580442 | orchestrator | 14:49:22.580 STDOUT terraform:  + allocation_pool {
2025-06-03 14:49:22.580482 | orchestrator | 14:49:22.580 STDOUT terraform:  + end = "192.168.31.250"
2025-06-03 14:49:22.580496 | orchestrator | 14:49:22.580 STDOUT terraform:  + start = "192.168.31.200"
2025-06-03 14:49:22.580509 | orchestrator | 14:49:22.580 STDOUT terraform:  }
2025-06-03 14:49:22.580522 | orchestrator | 14:49:22.580 STDOUT terraform:  }
2025-06-03 14:49:22.580541 | orchestrator | 14:49:22.580 STDOUT terraform:  # terraform_data.image will be created
2025-06-03 14:49:22.580626 | orchestrator | 14:49:22.580 STDOUT terraform:  + resource "terraform_data" "image" {
2025-06-03 14:49:22.580644 | orchestrator | 14:49:22.580 STDOUT terraform:  + id = (known after apply)
2025-06-03 14:49:22.580654 | orchestrator | 14:49:22.580 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-06-03 14:49:22.580664 | orchestrator | 14:49:22.580 STDOUT terraform:  + output = (known after apply)
2025-06-03 14:49:22.580676 | orchestrator | 14:49:22.580 STDOUT terraform:  }
2025-06-03 14:49:22.580689 | orchestrator | 14:49:22.580 STDOUT terraform:  # terraform_data.image_node will be created
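[editor's note] The `terraform_data.image` and `terraform_data.image_node` resources planned above carry the image name as `input`; this built-in resource type is commonly used as a replacement trigger for dependent resources. A minimal sketch of that pattern, with the consuming resource and its wiring assumed rather than taken from the testbed sources:

```hcl
# Tracks the image name; after apply, "output" mirrors "input".
resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}

# Hypothetical consumer: the instance is replaced whenever
# terraform_data.image changes (i.e. when the image name changes).
resource "openstack_compute_instance_v2" "example" {
  name       = "example"
  image_name = terraform_data.image.output

  lifecycle {
    replace_triggered_by = [terraform_data.image]
  }
}
```

`terraform_data` requires Terraform 1.4+ and replaces the older `null_resource` + `triggers` idiom for this purpose.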
2025-06-03 14:49:22.580743 | orchestrator | 14:49:22.580 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-06-03 14:49:22.580758 | orchestrator | 14:49:22.580 STDOUT terraform:  + id = (known after apply)
2025-06-03 14:49:22.580771 | orchestrator | 14:49:22.580 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-06-03 14:49:22.580821 | orchestrator | 14:49:22.580 STDOUT terraform:  + output = (known after apply)
2025-06-03 14:49:22.580833 | orchestrator | 14:49:22.580 STDOUT terraform:  }
2025-06-03 14:49:22.580846 | orchestrator | 14:49:22.580 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-06-03 14:49:22.580859 | orchestrator | 14:49:22.580 STDOUT terraform: Changes to Outputs:
2025-06-03 14:49:22.580952 | orchestrator | 14:49:22.580 STDOUT terraform:  + manager_address = (sensitive value)
2025-06-03 14:49:22.580966 | orchestrator | 14:49:22.580 STDOUT terraform:  + private_key = (sensitive value)
2025-06-03 14:49:22.628501 | orchestrator | 14:49:22.628 STDOUT terraform: terraform_data.image_node: Creating...
2025-06-03 14:49:22.776727 | orchestrator | 14:49:22.776 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=5c341815-b219-8dfc-6426-5013a00bdb16]
2025-06-03 14:49:22.777797 | orchestrator | 14:49:22.777 STDOUT terraform: terraform_data.image: Creating...
2025-06-03 14:49:22.782378 | orchestrator | 14:49:22.782 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=10aea28d-a48b-3689-60a5-a1fa9d27ae54]
2025-06-03 14:49:22.791758 | orchestrator | 14:49:22.791 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-06-03 14:49:22.798262 | orchestrator | 14:49:22.798 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-06-03 14:49:22.798763 | orchestrator | 14:49:22.798 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-06-03 14:49:22.799039 | orchestrator | 14:49:22.798 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-06-03 14:49:22.800254 | orchestrator | 14:49:22.800 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-06-03 14:49:22.801162 | orchestrator | 14:49:22.801 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-06-03 14:49:22.801765 | orchestrator | 14:49:22.801 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-06-03 14:49:22.806785 | orchestrator | 14:49:22.806 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-06-03 14:49:22.807387 | orchestrator | 14:49:22.807 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-06-03 14:49:22.812258 | orchestrator | 14:49:22.812 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-06-03 14:49:23.275741 | orchestrator | 14:49:23.275 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-06-03 14:49:23.278551 | orchestrator | 14:49:23.278 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-06-03 14:49:23.340093 | orchestrator | 14:49:23.339 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-06-03 14:49:23.348960 | orchestrator | 14:49:23.348 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-06-03 14:49:23.379597 | orchestrator | 14:49:23.379 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-06-03 14:49:23.388430 | orchestrator | 14:49:23.388 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-06-03 14:49:28.802858 | orchestrator | 14:49:28.802 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=7788da18-4f13-4fb7-a798-50015026ed5b]
2025-06-03 14:49:28.814253 | orchestrator | 14:49:28.813 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-06-03 14:49:32.800902 | orchestrator | 14:49:32.800 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-06-03 14:49:32.801850 | orchestrator | 14:49:32.801 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-06-03 14:49:32.802984 | orchestrator | 14:49:32.802 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-06-03 14:49:32.806592 | orchestrator | 14:49:32.806 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
2025-06-03 14:49:32.806719 | orchestrator | 14:49:32.806 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-06-03 14:49:32.808741 | orchestrator | 14:49:32.808 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-06-03 14:49:32.808937 | orchestrator | 14:49:32.808 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-06-03 14:49:33.350408 | orchestrator | 14:49:33.350 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-06-03 14:49:33.389650 | orchestrator | 14:49:33.389 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-06-03 14:49:33.439484 | orchestrator | 14:49:33.439 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=ffe2a0ca-5a38-47a9-803d-00b473435346]
2025-06-03 14:49:33.447489 | orchestrator | 14:49:33.447 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-06-03 14:49:33.448949 | orchestrator | 14:49:33.448 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=88cf38eb-fdbf-404b-9f1d-cd32f6bedf4b]
2025-06-03 14:49:33.460894 | orchestrator | 14:49:33.460 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-06-03 14:49:33.461375 | orchestrator | 14:49:33.461 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=5c901d52-eede-42c5-873c-7ade3ca032e1]
2025-06-03 14:49:33.477502 | orchestrator | 14:49:33.477 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-06-03 14:49:33.478075 | orchestrator | 14:49:33.477 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=ed092372-9559-4d48-8a48-c44bdb9ee908]
2025-06-03 14:49:33.480635 | orchestrator | 14:49:33.480 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=61b072b3-0d8d-4456-975d-55fef61370d3]
2025-06-03 14:49:33.483839 | orchestrator | 14:49:33.483 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-06-03 14:49:33.485033 | orchestrator | 14:49:33.484 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=fa411336-a154-4770-b6c1-ce8fec2c95f2]
2025-06-03 14:49:33.487529 | orchestrator | 14:49:33.487 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-06-03 14:49:33.495007 | orchestrator | 14:49:33.494 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=b1c5376b-f7c7-4aac-a0b2-3df8be7d9631]
2025-06-03 14:49:33.497362 | orchestrator | 14:49:33.497 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-06-03 14:49:33.502485 | orchestrator | 14:49:33.502 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 1s [id=d29bb0acbee966f60a37faf1203a5bfef51fb14f]
2025-06-03 14:49:33.505681 | orchestrator | 14:49:33.505 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-06-03 14:49:33.513714 | orchestrator | 14:49:33.513 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-06-03 14:49:33.518148 | orchestrator | 14:49:33.518 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=be1ab0cb03f7ab2264065238762d92e543d793d7]
2025-06-03 14:49:33.524421 | orchestrator | 14:49:33.524 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-06-03 14:49:33.572679 | orchestrator | 14:49:33.572 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=b4ac7e97-dff3-4114-bb9f-c387d4fd8c04]
2025-06-03 14:49:33.592107 | orchestrator | 14:49:33.591 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=35e8ec34-b9aa-4705-9105-50464be240ba]
2025-06-03 14:49:38.814839 | orchestrator | 14:49:38.814 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-06-03 14:49:39.152885 | orchestrator | 14:49:39.152 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=f322991d-6658-4d17-a0b4-4261e9038a6c]
2025-06-03 14:49:39.484038 | orchestrator | 14:49:39.483 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 5s [id=370af146-d51e-43a3-b608-2dc3262812f2]
2025-06-03 14:49:39.491601 | orchestrator | 14:49:39.491 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-06-03 14:49:43.448606 | orchestrator | 14:49:43.448 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-06-03 14:49:43.461992 | orchestrator | 14:49:43.461 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-06-03 14:49:43.478765 | orchestrator | 14:49:43.478 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-06-03 14:49:43.485706 | orchestrator | 14:49:43.485 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-06-03 14:49:43.489510 | orchestrator | 14:49:43.489 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-06-03 14:49:43.507042 | orchestrator | 14:49:43.506 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-06-03 14:49:43.886772 | orchestrator | 14:49:43.886 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=7e1af086-74b9-4b96-b1ab-e1589a6f5143]
2025-06-03 14:49:44.473952 | orchestrator | 14:49:44.473 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=7cf15848-6f73-4878-927d-31873f9154b7]
2025-06-03 14:49:44.476463 | orchestrator | 14:49:44.475 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=ec1efc19-1b1e-4f39-8db8-97e27f5004aa]
2025-06-03 14:49:44.476566 | orchestrator | 14:49:44.476 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=420efcde-9aa9-4277-94e6-eff067055985]
2025-06-03 14:49:44.476594 | orchestrator | 14:49:44.476 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=f6db3371-ad49-4dd9-a193-0ba30b3292ba]
2025-06-03 14:49:44.476777 | orchestrator | 14:49:44.476 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=8b65d546-d325-4a0d-b120-75afa88c00de]
2025-06-03 14:49:47.443869 | orchestrator | 14:49:47.443 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=7e4b5e6a-e9e0-4cac-8125-39dff1492c14]
2025-06-03 14:49:47.454353 | orchestrator | 14:49:47.454 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-06-03 14:49:47.454428 | orchestrator | 14:49:47.454 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-06-03 14:49:47.462467 | orchestrator | 14:49:47.462 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
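[editor's note] The router created above is later wired to the management subnet via `openstack_networking_router_interface_v2`. A sketch of that resource pair, with the external network data source assumed (not taken from the testbed sources):

```hcl
# Router with a gateway on a hypothetical external/provider network.
resource "openstack_networking_router_v2" "router" {
  name                = "testbed-router"
  external_network_id = data.openstack_networking_network_v2.public.id
}

# Attaches the management subnet to the router; Terraform orders this
# after both the router and the subnet exist, matching the log above.
resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}
```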
2025-06-03 14:49:47.641280 | orchestrator | 14:49:47.640 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=ae149ccc-51d2-4f48-b837-67112330a72e]
2025-06-03 14:49:47.661149 | orchestrator | 14:49:47.660 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-06-03 14:49:47.668726 | orchestrator | 14:49:47.668 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-06-03 14:49:47.670178 | orchestrator | 14:49:47.670 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-06-03 14:49:47.681453 | orchestrator | 14:49:47.681 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-06-03 14:49:47.682657 | orchestrator | 14:49:47.682 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-06-03 14:49:47.683825 | orchestrator | 14:49:47.683 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-06-03 14:49:47.683902 | orchestrator | 14:49:47.683 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-06-03 14:49:47.684044 | orchestrator | 14:49:47.683 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-06-03 14:49:47.903775 | orchestrator | 14:49:47.903 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=723cf5ed-534f-452d-9c4a-2ee93b75cdc2]
2025-06-03 14:49:47.922492 | orchestrator | 14:49:47.922 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-06-03 14:49:48.078154 | orchestrator | 14:49:48.077 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=9bc0b138-2630-445c-a902-edd276eddb0c]
2025-06-03 14:49:48.090622 | orchestrator | 14:49:48.090 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-06-03 14:49:48.459856 | orchestrator | 14:49:48.459 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=62e89f43-0bad-4bb6-97df-6c3c83de8a85]
2025-06-03 14:49:48.472407 | orchestrator | 14:49:48.472 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-06-03 14:49:48.596368 | orchestrator | 14:49:48.595 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=8e5e2f58-b156-4fe5-8c93-ffd505a988cf]
2025-06-03 14:49:48.602349 | orchestrator | 14:49:48.601 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-06-03 14:49:48.749760 | orchestrator | 14:49:48.749 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=e5e36d48-408c-4d19-b713-320f9e2c422a]
2025-06-03 14:49:48.761109 | orchestrator | 14:49:48.760 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-06-03 14:49:48.783106 | orchestrator | 14:49:48.782 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=f492d9cf-2fce-4802-adc5-d6cb9755de8c]
2025-06-03 14:49:48.789605 | orchestrator | 14:49:48.789 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-06-03 14:49:48.911776 | orchestrator | 14:49:48.911 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=9a77e390-deed-4e6e-96ab-ef16a257bcfd]
2025-06-03 14:49:48.918288 | orchestrator | 14:49:48.917 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-06-03 14:49:49.209822 | orchestrator | 14:49:49.209 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=f505e18c-700f-4f9c-8245-86acff97eeb5]
2025-06-03 14:49:49.216021 | orchestrator | 14:49:49.215 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-06-03 14:49:49.377091 | orchestrator | 14:49:49.376 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=eefe86fd-1468-4c90-b8e5-55d0bef7f6e4]
2025-06-03 14:49:49.550407 | orchestrator | 14:49:49.549 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=58b4a533-8d55-4c90-8790-f4f5506fc9cd]
2025-06-03 14:49:53.403189 | orchestrator | 14:49:53.403 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 5s [id=603d5fc2-3688-4ac2-ad93-1c270d9837f1]
2025-06-03 14:49:53.429363 | orchestrator | 14:49:53.429 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 5s [id=32415d1d-67fb-48e2-b420-3bab920d9b0d]
2025-06-03 14:49:53.458154 | orchestrator | 14:49:53.457 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 5s [id=7b8af06f-254f-4061-87ed-4c3956a5691e]
2025-06-03 14:49:53.697758 | orchestrator | 14:49:53.697 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=5f97ddf9-261a-482a-82db-2cf9abe8acb0]
2025-06-03 14:49:53.714309 | orchestrator | 14:49:53.714 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=61569d3b-6c5d-4c4e-9520-ad4d2b6958f3]
2025-06-03 14:49:54.247163 | orchestrator | 14:49:54.246 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=5721f1e5-5961-4a31-80ae-729cd26239e0]
2025-06-03 14:49:54.313408 | orchestrator | 14:49:54.313 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 5s [id=0f82f557-156a-4e50-882c-54446549ffd3]
2025-06-03 14:49:55.683759 | orchestrator | 14:49:55.683 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 9s [id=1bdbb960-2864-461a-9c0d-7c11e979f488]
2025-06-03 14:49:55.706530 | orchestrator | 14:49:55.706 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-06-03 14:49:55.720319 | orchestrator | 14:49:55.720 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-06-03 14:49:55.727581 | orchestrator | 14:49:55.727 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-06-03 14:49:55.733558 | orchestrator | 14:49:55.733 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-06-03 14:49:55.741188 | orchestrator | 14:49:55.740 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-06-03 14:49:55.744560 | orchestrator | 14:49:55.744 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-06-03 14:49:55.775561 | orchestrator | 14:49:55.775 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-06-03 14:50:02.578211 | orchestrator | 14:50:02.577 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=87c65336-3ffb-4c44-9430-be3c00af5e6b]
2025-06-03 14:50:02.592530 | orchestrator | 14:50:02.592 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-06-03 14:50:02.593171 | orchestrator | 14:50:02.592 STDOUT terraform: local_file.inventory: Creating...
2025-06-03 14:50:02.595939 | orchestrator | 14:50:02.595 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-06-03 14:50:02.603419 | orchestrator | 14:50:02.602 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=8c33b490f4b0b356b7dfa9e1f5c98040977aba7e]
2025-06-03 14:50:02.603965 | orchestrator | 14:50:02.603 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=8e2009fb67981825c3b2c33690fd0b50566ae865]
2025-06-03 14:50:03.408042 | orchestrator | 14:50:03.407 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=87c65336-3ffb-4c44-9430-be3c00af5e6b]
2025-06-03 14:50:05.724953 | orchestrator | 14:50:05.724 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-06-03 14:50:06.161785 | orchestrator | 14:50:05.742 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-06-03 14:50:06.161858 | orchestrator | 14:50:05.742 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-06-03 14:50:06.161874 | orchestrator | 14:50:05.748 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-06-03 14:50:06.161886 | orchestrator | 14:50:05.751 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-06-03 14:50:06.161921 | orchestrator | 14:50:05.776 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-06-03 14:50:15.725415 | orchestrator | 14:50:15.725 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-06-03 14:50:15.744195 | orchestrator | 14:50:15.743 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-06-03 14:50:15.744281 | orchestrator | 14:50:15.744 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-06-03 14:50:15.748289 | orchestrator | 14:50:15.748 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-06-03 14:50:15.752603 | orchestrator | 14:50:15.752 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-06-03 14:50:15.777775 | orchestrator | 14:50:15.777 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-06-03 14:50:25.728419 | orchestrator | 14:50:25.728 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-06-03 14:50:25.744611 | orchestrator | 14:50:25.744 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-06-03 14:50:25.744695 | orchestrator | 14:50:25.744 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-06-03 14:50:25.748770 | orchestrator | 14:50:25.748 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-06-03 14:50:25.753171 | orchestrator | 14:50:25.752 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-06-03 14:50:25.778583 | orchestrator | 14:50:25.778 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-06-03 14:50:26.253552 | orchestrator | 14:50:26.250 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=4e4d8392-f4cb-4c48-b5b2-abd5c5a23597]
2025-06-03 14:50:26.350181 | orchestrator | 14:50:26.349 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 30s [id=a1f93003-bcef-4880-aaa4-279eca0356e5]
2025-06-03 14:50:26.362754 | orchestrator | 14:50:26.362 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=1abe5905-a44e-4543-9124-47ea5ff9ad89]
2025-06-03 14:50:26.452136 | orchestrator | 14:50:26.451 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 30s [id=bce1d8a5-6ada-4366-9d2f-ca5df28da14d]
2025-06-03 14:50:26.482849 | orchestrator | 14:50:26.482 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=2c917967-7990-4880-879e-6ca707835745]
2025-06-03 14:50:26.543729 | orchestrator | 14:50:26.543 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=7aeade28-3a86-4237-9238-b88ac67fa1c0]
2025-06-03 14:50:26.562666 | orchestrator | 14:50:26.562 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-06-03 14:50:26.565433 | orchestrator | 14:50:26.565 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=8177237098438128989]
2025-06-03 14:50:26.572665 | orchestrator | 14:50:26.572 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-06-03 14:50:26.572812 | orchestrator | 14:50:26.572 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-06-03 14:50:26.576530 | orchestrator | 14:50:26.576 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-06-03 14:50:26.583236 | orchestrator | 14:50:26.583 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-06-03 14:50:26.584001 | orchestrator | 14:50:26.583 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-06-03 14:50:26.585428 | orchestrator | 14:50:26.585 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-06-03 14:50:26.594912 | orchestrator | 14:50:26.592 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-06-03 14:50:26.594954 | orchestrator | 14:50:26.592 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-06-03 14:50:26.594963 | orchestrator | 14:50:26.593 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-06-03 14:50:26.596909 | orchestrator | 14:50:26.596 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
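[editor's note] The nine indexed `node_volume_attachment` resources pair the data volumes with the node servers; judging by the composite attachment IDs later in the log, several volumes land on the same instance. A count-based sketch of that pattern (the index arithmetic is an assumption inferred from the log, not the testbed's actual code):

```hcl
# Attach 9 data volumes across the node servers.
# Mapping volumes to instances by modulo is an assumed illustration.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```

The resulting attachment ID is `<instance_id>/<volume_id>`, which matches the composite IDs reported in the creation messages below.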
2025-06-03 14:50:31.906946 | orchestrator | 14:50:31.906 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=a1f93003-bcef-4880-aaa4-279eca0356e5/b1c5376b-f7c7-4aac-a0b2-3df8be7d9631] 2025-06-03 14:50:31.933068 | orchestrator | 14:50:31.932 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=bce1d8a5-6ada-4366-9d2f-ca5df28da14d/ffe2a0ca-5a38-47a9-803d-00b473435346] 2025-06-03 14:50:31.948911 | orchestrator | 14:50:31.948 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=7aeade28-3a86-4237-9238-b88ac67fa1c0/b4ac7e97-dff3-4114-bb9f-c387d4fd8c04] 2025-06-03 14:50:31.971895 | orchestrator | 14:50:31.971 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=bce1d8a5-6ada-4366-9d2f-ca5df28da14d/ed092372-9559-4d48-8a48-c44bdb9ee908] 2025-06-03 14:50:31.973832 | orchestrator | 14:50:31.973 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=7aeade28-3a86-4237-9238-b88ac67fa1c0/61b072b3-0d8d-4456-975d-55fef61370d3] 2025-06-03 14:50:32.153893 | orchestrator | 14:50:32.153 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=bce1d8a5-6ada-4366-9d2f-ca5df28da14d/fa411336-a154-4770-b6c1-ce8fec2c95f2] 2025-06-03 14:50:32.167719 | orchestrator | 14:50:32.167 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=a1f93003-bcef-4880-aaa4-279eca0356e5/35e8ec34-b9aa-4705-9105-50464be240ba] 2025-06-03 14:50:32.182180 | orchestrator | 14:50:32.181 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=7aeade28-3a86-4237-9238-b88ac67fa1c0/5c901d52-eede-42c5-873c-7ade3ca032e1] 2025-06-03 14:50:32.202531 | orchestrator | 
14:50:32.202 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=a1f93003-bcef-4880-aaa4-279eca0356e5/88cf38eb-fdbf-404b-9f1d-cd32f6bedf4b] 2025-06-03 14:50:36.599689 | orchestrator | 14:50:36.599 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-06-03 14:50:46.600168 | orchestrator | 14:50:46.599 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-06-03 14:50:47.638504 | orchestrator | 14:50:47.638 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=2361346b-951e-4bae-8a2d-69c7f274691d] 2025-06-03 14:50:47.663622 | orchestrator | 14:50:47.658 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2025-06-03 14:50:47.663696 | orchestrator | 14:50:47.658 STDOUT terraform: Outputs: 2025-06-03 14:50:47.663712 | orchestrator | 14:50:47.658 STDOUT terraform: manager_address = 2025-06-03 14:50:47.663722 | orchestrator | 14:50:47.658 STDOUT terraform: private_key = 2025-06-03 14:50:47.741616 | orchestrator | ok: Runtime: 0:01:37.381450 2025-06-03 14:50:47.767488 | 2025-06-03 14:50:47.767627 | TASK [Create infrastructure (stable)] 2025-06-03 14:50:48.304191 | orchestrator | skipping: Conditional result was False 2025-06-03 14:50:48.313147 | 2025-06-03 14:50:48.313273 | TASK [Fetch manager address] 2025-06-03 14:50:48.789508 | orchestrator | ok 2025-06-03 14:50:48.801529 | 2025-06-03 14:50:48.801657 | TASK [Set manager_host address] 2025-06-03 14:50:48.880441 | orchestrator | ok 2025-06-03 14:50:48.890165 | 2025-06-03 14:50:48.890272 | LOOP [Update ansible collections] 2025-06-03 14:50:59.667813 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-03 14:50:59.668453 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-03 14:50:59.668554 | orchestrator | 
Starting galaxy collection install process 2025-06-03 14:50:59.668623 | orchestrator | Process install dependency map 2025-06-03 14:50:59.668680 | orchestrator | Starting collection install process 2025-06-03 14:50:59.668731 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2025-06-03 14:50:59.668788 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2025-06-03 14:50:59.668850 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-06-03 14:50:59.668960 | orchestrator | ok: Item: commons Runtime: 0:00:10.371992 2025-06-03 14:51:02.195553 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-03 14:51:02.195723 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-03 14:51:02.195778 | orchestrator | Starting galaxy collection install process 2025-06-03 14:51:02.195819 | orchestrator | Process install dependency map 2025-06-03 14:51:02.195856 | orchestrator | Starting collection install process 2025-06-03 14:51:02.195891 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2025-06-03 14:51:02.195925 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2025-06-03 14:51:02.195958 | orchestrator | osism.services:999.0.0 was installed successfully 2025-06-03 14:51:02.196009 | orchestrator | ok: Item: services Runtime: 0:00:02.224833 2025-06-03 14:51:02.218707 | 2025-06-03 14:51:02.218904 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-03 14:51:12.780443 | orchestrator | ok 2025-06-03 14:51:12.793285 | 2025-06-03 14:51:12.793422 | TASK [Wait a little longer for the manager so that 
everything is ready] 2025-06-03 14:52:12.842037 | orchestrator | ok 2025-06-03 14:52:12.857086 | 2025-06-03 14:52:12.857240 | TASK [Fetch manager ssh hostkey] 2025-06-03 14:52:14.435540 | orchestrator | Output suppressed because no_log was given 2025-06-03 14:52:14.454678 | 2025-06-03 14:52:14.454952 | TASK [Get ssh keypair from terraform environment] 2025-06-03 14:52:14.998154 | orchestrator | ok: Runtime: 0:00:00.015675 2025-06-03 14:52:15.015128 | 2025-06-03 14:52:15.015297 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-03 14:52:15.063917 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-03 14:52:15.074555 | 2025-06-03 14:52:15.074693 | TASK [Run manager part 0] 2025-06-03 14:52:17.005359 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-03 14:52:17.075893 | orchestrator | 2025-06-03 14:52:17.075949 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-06-03 14:52:17.075958 | orchestrator | 2025-06-03 14:52:17.075972 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-06-03 14:52:18.931382 | orchestrator | ok: [testbed-manager] 2025-06-03 14:52:18.931448 | orchestrator | 2025-06-03 14:52:18.931478 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-03 14:52:18.931493 | orchestrator | 2025-06-03 14:52:18.931506 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-03 14:52:21.101427 | orchestrator | ok: [testbed-manager] 2025-06-03 14:52:21.101483 | orchestrator | 2025-06-03 14:52:21.101491 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-03 14:52:21.768097 | 
orchestrator | ok: [testbed-manager] 2025-06-03 14:52:21.768186 | orchestrator | 2025-06-03 14:52:21.768196 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-03 14:52:21.822474 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:52:21.822539 | orchestrator | 2025-06-03 14:52:21.822551 | orchestrator | TASK [Update package cache] **************************************************** 2025-06-03 14:52:21.849653 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:52:21.849715 | orchestrator | 2025-06-03 14:52:21.849726 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-03 14:52:21.875792 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:52:21.875852 | orchestrator | 2025-06-03 14:52:21.875860 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-03 14:52:21.904648 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:52:21.904716 | orchestrator | 2025-06-03 14:52:21.904727 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-03 14:52:21.936469 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:52:21.936546 | orchestrator | 2025-06-03 14:52:21.936563 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-06-03 14:52:21.964523 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:52:21.964571 | orchestrator | 2025-06-03 14:52:21.964579 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-06-03 14:52:21.997407 | orchestrator | skipping: [testbed-manager] 2025-06-03 14:52:21.997512 | orchestrator | 2025-06-03 14:52:21.997537 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-06-03 14:52:22.776941 | orchestrator | changed: [testbed-manager] 2025-06-03 14:52:22.777011 | 
orchestrator | 2025-06-03 14:52:22.777022 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-06-03 14:55:43.978457 | orchestrator | changed: [testbed-manager] 2025-06-03 14:55:43.978508 | orchestrator | 2025-06-03 14:55:43.978521 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-03 14:57:06.458997 | orchestrator | changed: [testbed-manager] 2025-06-03 14:57:06.459052 | orchestrator | 2025-06-03 14:57:06.459062 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-03 14:57:26.360945 | orchestrator | changed: [testbed-manager] 2025-06-03 14:57:26.360999 | orchestrator | 2025-06-03 14:57:26.361009 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-03 14:57:35.466390 | orchestrator | changed: [testbed-manager] 2025-06-03 14:57:35.466484 | orchestrator | 2025-06-03 14:57:35.466501 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-03 14:57:35.517176 | orchestrator | ok: [testbed-manager] 2025-06-03 14:57:35.517399 | orchestrator | 2025-06-03 14:57:35.517409 | orchestrator | TASK [Get current user] ******************************************************** 2025-06-03 14:57:36.341595 | orchestrator | ok: [testbed-manager] 2025-06-03 14:57:36.341648 | orchestrator | 2025-06-03 14:57:36.341658 | orchestrator | TASK [Create venv directory] *************************************************** 2025-06-03 14:57:37.064165 | orchestrator | changed: [testbed-manager] 2025-06-03 14:57:37.064271 | orchestrator | 2025-06-03 14:57:37.064289 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-06-03 14:57:43.555498 | orchestrator | changed: [testbed-manager] 2025-06-03 14:57:43.555537 | orchestrator | 2025-06-03 14:57:43.555559 | orchestrator | TASK [Install ansible-core in 
venv] ******************************************** 2025-06-03 14:57:49.548821 | orchestrator | changed: [testbed-manager] 2025-06-03 14:57:49.548931 | orchestrator | 2025-06-03 14:57:49.548954 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-06-03 14:57:52.224031 | orchestrator | changed: [testbed-manager] 2025-06-03 14:57:52.224065 | orchestrator | 2025-06-03 14:57:52.224071 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-06-03 14:57:54.008941 | orchestrator | changed: [testbed-manager] 2025-06-03 14:57:54.009008 | orchestrator | 2025-06-03 14:57:54.009023 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-06-03 14:57:55.152696 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-03 14:57:55.152849 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-03 14:57:55.152857 | orchestrator | 2025-06-03 14:57:55.152862 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-06-03 14:57:55.200555 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-03 14:57:55.201412 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-03 14:57:55.201450 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-03 14:57:55.201464 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-06-03 14:58:05.592498 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-03 14:58:05.592601 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-03 14:58:05.592618 | orchestrator | 2025-06-03 14:58:05.592631 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-06-03 14:58:06.167569 | orchestrator | changed: [testbed-manager] 2025-06-03 14:58:06.167808 | orchestrator | 2025-06-03 14:58:06.167950 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-06-03 15:00:26.331485 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-06-03 15:00:26.331770 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-06-03 15:00:26.331838 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-06-03 15:00:26.331853 | orchestrator | 2025-06-03 15:00:26.331866 | orchestrator | TASK [Install local collections] *********************************************** 2025-06-03 15:00:28.643038 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-06-03 15:00:28.643123 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-06-03 15:00:28.643138 | orchestrator | 2025-06-03 15:00:28.643151 | orchestrator | PLAY [Create operator user] **************************************************** 2025-06-03 15:00:28.643164 | orchestrator | 2025-06-03 15:00:28.643175 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-03 15:00:30.036593 | orchestrator | ok: [testbed-manager] 2025-06-03 15:00:30.036677 | orchestrator | 2025-06-03 15:00:30.036695 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-03 15:00:30.077329 | orchestrator | ok: [testbed-manager] 2025-06-03 15:00:30.077376 | 
orchestrator | 2025-06-03 15:00:30.077382 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-03 15:00:30.186790 | orchestrator | ok: [testbed-manager] 2025-06-03 15:00:30.186839 | orchestrator | 2025-06-03 15:00:30.186847 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-03 15:00:30.989669 | orchestrator | changed: [testbed-manager] 2025-06-03 15:00:30.989747 | orchestrator | 2025-06-03 15:00:30.989761 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-03 15:00:31.738449 | orchestrator | changed: [testbed-manager] 2025-06-03 15:00:31.738564 | orchestrator | 2025-06-03 15:00:31.738582 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-03 15:00:33.079935 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-06-03 15:00:33.080178 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-06-03 15:00:33.080198 | orchestrator | 2025-06-03 15:00:33.080225 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-03 15:00:34.517323 | orchestrator | changed: [testbed-manager] 2025-06-03 15:00:34.517623 | orchestrator | 2025-06-03 15:00:34.517643 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-03 15:00:36.278862 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-06-03 15:00:36.278950 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-06-03 15:00:36.278966 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-06-03 15:00:36.278979 | orchestrator | 2025-06-03 15:00:36.278992 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-03 15:00:36.857945 | orchestrator | changed: [testbed-manager] 
2025-06-03 15:00:36.858006 | orchestrator | 2025-06-03 15:00:36.858051 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-03 15:00:36.929580 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:00:36.929649 | orchestrator | 2025-06-03 15:00:36.929664 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-03 15:00:37.748914 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-03 15:00:37.749004 | orchestrator | changed: [testbed-manager] 2025-06-03 15:00:37.749028 | orchestrator | 2025-06-03 15:00:37.749040 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-03 15:00:37.785022 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:00:37.785101 | orchestrator | 2025-06-03 15:00:37.785117 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-03 15:00:37.823196 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:00:37.823257 | orchestrator | 2025-06-03 15:00:37.823266 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-03 15:00:37.857674 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:00:37.857738 | orchestrator | 2025-06-03 15:00:37.857750 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-03 15:00:37.907761 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:00:37.907837 | orchestrator | 2025-06-03 15:00:37.907853 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-03 15:00:38.673011 | orchestrator | ok: [testbed-manager] 2025-06-03 15:00:38.673110 | orchestrator | 2025-06-03 15:00:38.673142 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-03 15:00:38.673166 | orchestrator | 2025-06-03 
15:00:38.673191 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-03 15:00:40.050788 | orchestrator | ok: [testbed-manager] 2025-06-03 15:00:40.050879 | orchestrator | 2025-06-03 15:00:40.050895 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-06-03 15:00:41.004962 | orchestrator | changed: [testbed-manager] 2025-06-03 15:00:41.005036 | orchestrator | 2025-06-03 15:00:41.005050 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:00:41.005063 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-03 15:00:41.005073 | orchestrator | 2025-06-03 15:00:41.428502 | orchestrator | ok: Runtime: 0:08:25.703727 2025-06-03 15:00:41.446157 | 2025-06-03 15:00:41.446300 | TASK [Point out that the log in on the manager is now possible] 2025-06-03 15:00:41.486118 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-06-03 15:00:41.496096 | 2025-06-03 15:00:41.496217 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-03 15:00:41.545124 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-06-03 15:00:41.555721 | 2025-06-03 15:00:41.555845 | TASK [Run manager part 1 + 2] 2025-06-03 15:00:42.528902 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-03 15:00:42.589526 | orchestrator | 2025-06-03 15:00:42.589606 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-06-03 15:00:42.589624 | orchestrator | 2025-06-03 15:00:42.589655 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-03 15:00:45.515184 | orchestrator | ok: [testbed-manager] 2025-06-03 15:00:45.515274 | orchestrator | 2025-06-03 15:00:45.515317 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-03 15:00:45.560056 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:00:45.560110 | orchestrator | 2025-06-03 15:00:45.560120 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-03 15:00:45.607468 | orchestrator | ok: [testbed-manager] 2025-06-03 15:00:45.607558 | orchestrator | 2025-06-03 15:00:45.607576 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-03 15:00:45.653644 | orchestrator | ok: [testbed-manager] 2025-06-03 15:00:45.653715 | orchestrator | 2025-06-03 15:00:45.653732 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-03 15:00:45.716242 | orchestrator | ok: [testbed-manager] 2025-06-03 15:00:45.716326 | orchestrator | 2025-06-03 15:00:45.716343 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-03 15:00:45.775896 | orchestrator | ok: [testbed-manager] 2025-06-03 15:00:45.775977 | orchestrator | 2025-06-03 15:00:45.775994 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-03 15:00:45.818378 | 
orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-06-03 15:00:45.818449 | orchestrator | 2025-06-03 15:00:45.818511 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-03 15:00:46.547661 | orchestrator | ok: [testbed-manager] 2025-06-03 15:00:46.547720 | orchestrator | 2025-06-03 15:00:46.547729 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-03 15:00:46.597979 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:00:46.598036 | orchestrator | 2025-06-03 15:00:46.598044 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-03 15:00:47.964958 | orchestrator | changed: [testbed-manager] 2025-06-03 15:00:47.965022 | orchestrator | 2025-06-03 15:00:47.965033 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-03 15:00:48.558679 | orchestrator | ok: [testbed-manager] 2025-06-03 15:00:48.558743 | orchestrator | 2025-06-03 15:00:48.558753 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-03 15:00:49.691512 | orchestrator | changed: [testbed-manager] 2025-06-03 15:00:49.691598 | orchestrator | 2025-06-03 15:00:49.691616 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-03 15:01:02.874296 | orchestrator | changed: [testbed-manager] 2025-06-03 15:01:02.874399 | orchestrator | 2025-06-03 15:01:02.874417 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-03 15:01:03.577780 | orchestrator | ok: [testbed-manager] 2025-06-03 15:01:03.577864 | orchestrator | 2025-06-03 15:01:03.577883 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-06-03 15:01:03.637415 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:01:03.637487 | orchestrator | 2025-06-03 15:01:03.637495 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-06-03 15:01:04.583330 | orchestrator | changed: [testbed-manager] 2025-06-03 15:01:04.583395 | orchestrator | 2025-06-03 15:01:04.583410 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-06-03 15:01:05.537516 | orchestrator | changed: [testbed-manager] 2025-06-03 15:01:05.537606 | orchestrator | 2025-06-03 15:01:05.537624 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-06-03 15:01:06.107274 | orchestrator | changed: [testbed-manager] 2025-06-03 15:01:06.107311 | orchestrator | 2025-06-03 15:01:06.107317 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-06-03 15:01:06.152982 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-03 15:01:06.153105 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-03 15:01:06.153132 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-03 15:01:06.153347 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-06-03 15:01:10.702676 | orchestrator | changed: [testbed-manager] 2025-06-03 15:01:10.702760 | orchestrator | 2025-06-03 15:01:10.702775 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-06-03 15:01:19.620302 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-06-03 15:01:19.620404 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-06-03 15:01:19.620421 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-06-03 15:01:19.620434 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-06-03 15:01:19.620493 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-06-03 15:01:19.620505 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-06-03 15:01:19.620517 | orchestrator | 2025-06-03 15:01:19.620530 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-06-03 15:01:20.668275 | orchestrator | changed: [testbed-manager] 2025-06-03 15:01:20.668383 | orchestrator | 2025-06-03 15:01:20.668410 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-06-03 15:01:20.715667 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:01:20.715748 | orchestrator | 2025-06-03 15:01:20.715763 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-06-03 15:01:23.769065 | orchestrator | changed: [testbed-manager] 2025-06-03 15:01:23.769192 | orchestrator | 2025-06-03 15:01:23.769208 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-06-03 15:01:23.813196 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:01:23.813287 | orchestrator | 2025-06-03 15:01:23.813303 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-06-03 15:03:00.632943 | orchestrator | changed: [testbed-manager] 2025-06-03 
15:03:00.633040 | orchestrator | 2025-06-03 15:03:00.633059 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-03 15:03:01.745196 | orchestrator | ok: [testbed-manager] 2025-06-03 15:03:01.745285 | orchestrator | 2025-06-03 15:03:01.745303 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:03:01.745319 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-06-03 15:03:01.745331 | orchestrator | 2025-06-03 15:03:02.173923 | orchestrator | ok: Runtime: 0:02:19.974768 2025-06-03 15:03:02.190628 | 2025-06-03 15:03:02.190784 | TASK [Reboot manager] 2025-06-03 15:03:03.728511 | orchestrator | ok: Runtime: 0:00:00.957941 2025-06-03 15:03:03.746519 | 2025-06-03 15:03:03.746657 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-03 15:03:17.654262 | orchestrator | ok 2025-06-03 15:03:17.664979 | 2025-06-03 15:03:17.665125 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-03 15:04:17.716837 | orchestrator | ok 2025-06-03 15:04:17.727497 | 2025-06-03 15:04:17.727626 | TASK [Deploy manager + bootstrap nodes] 2025-06-03 15:04:20.309170 | orchestrator | 2025-06-03 15:04:20.309455 | orchestrator | # DEPLOY MANAGER 2025-06-03 15:04:20.309485 | orchestrator | 2025-06-03 15:04:20.309501 | orchestrator | + set -e 2025-06-03 15:04:20.309516 | orchestrator | + echo 2025-06-03 15:04:20.309531 | orchestrator | + echo '# DEPLOY MANAGER' 2025-06-03 15:04:20.309549 | orchestrator | + echo 2025-06-03 15:04:20.309598 | orchestrator | + cat /opt/manager-vars.sh 2025-06-03 15:04:20.312737 | orchestrator | export NUMBER_OF_NODES=6 2025-06-03 15:04:20.312779 | orchestrator | 2025-06-03 15:04:20.312792 | orchestrator | export CEPH_VERSION=reef 2025-06-03 15:04:20.312806 | orchestrator | export CONFIGURATION_VERSION=main 2025-06-03 15:04:20.312819 | orchestrator 
| export MANAGER_VERSION=latest 2025-06-03 15:04:20.312844 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-06-03 15:04:20.312856 | orchestrator | 2025-06-03 15:04:20.312873 | orchestrator | export ARA=false 2025-06-03 15:04:20.312885 | orchestrator | export DEPLOY_MODE=manager 2025-06-03 15:04:20.312903 | orchestrator | export TEMPEST=false 2025-06-03 15:04:20.312915 | orchestrator | export IS_ZUUL=true 2025-06-03 15:04:20.312926 | orchestrator | 2025-06-03 15:04:20.312944 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.16 2025-06-03 15:04:20.312955 | orchestrator | export EXTERNAL_API=false 2025-06-03 15:04:20.312967 | orchestrator | 2025-06-03 15:04:20.312978 | orchestrator | export IMAGE_USER=ubuntu 2025-06-03 15:04:20.312991 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-06-03 15:04:20.313002 | orchestrator | 2025-06-03 15:04:20.313014 | orchestrator | export CEPH_STACK=ceph-ansible 2025-06-03 15:04:20.313032 | orchestrator | 2025-06-03 15:04:20.313044 | orchestrator | + echo 2025-06-03 15:04:20.313056 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-03 15:04:20.313724 | orchestrator | ++ export INTERACTIVE=false 2025-06-03 15:04:20.313749 | orchestrator | ++ INTERACTIVE=false 2025-06-03 15:04:20.313765 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-03 15:04:20.313778 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-03 15:04:20.313821 | orchestrator | + source /opt/manager-vars.sh 2025-06-03 15:04:20.313834 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-03 15:04:20.313846 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-03 15:04:20.313920 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-03 15:04:20.313936 | orchestrator | ++ CEPH_VERSION=reef 2025-06-03 15:04:20.313948 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-03 15:04:20.313959 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-03 15:04:20.313970 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-03 15:04:20.313981 | 
orchestrator | ++ MANAGER_VERSION=latest 2025-06-03 15:04:20.313992 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-03 15:04:20.314013 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-03 15:04:20.314057 | orchestrator | ++ export ARA=false 2025-06-03 15:04:20.314068 | orchestrator | ++ ARA=false 2025-06-03 15:04:20.314085 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-03 15:04:20.314096 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-03 15:04:20.314107 | orchestrator | ++ export TEMPEST=false 2025-06-03 15:04:20.314118 | orchestrator | ++ TEMPEST=false 2025-06-03 15:04:20.314129 | orchestrator | ++ export IS_ZUUL=true 2025-06-03 15:04:20.314140 | orchestrator | ++ IS_ZUUL=true 2025-06-03 15:04:20.314152 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.16 2025-06-03 15:04:20.314163 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.16 2025-06-03 15:04:20.314174 | orchestrator | ++ export EXTERNAL_API=false 2025-06-03 15:04:20.314185 | orchestrator | ++ EXTERNAL_API=false 2025-06-03 15:04:20.314196 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-03 15:04:20.314207 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-03 15:04:20.314218 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-03 15:04:20.314229 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-03 15:04:20.314240 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-03 15:04:20.314251 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-03 15:04:20.314263 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-06-03 15:04:20.358832 | orchestrator | + docker version 2025-06-03 15:04:20.617564 | orchestrator | Client: Docker Engine - Community 2025-06-03 15:04:20.617686 | orchestrator | Version: 27.5.1 2025-06-03 15:04:20.617705 | orchestrator | API version: 1.47 2025-06-03 15:04:20.617717 | orchestrator | Go version: go1.22.11 2025-06-03 15:04:20.617729 | orchestrator | Git commit: 9f9e405 2025-06-03 15:04:20.617740 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-03 15:04:20.617752 | orchestrator | OS/Arch: linux/amd64 2025-06-03 15:04:20.617764 | orchestrator | Context: default 2025-06-03 15:04:20.617775 | orchestrator | 2025-06-03 15:04:20.617787 | orchestrator | Server: Docker Engine - Community 2025-06-03 15:04:20.617798 | orchestrator | Engine: 2025-06-03 15:04:20.617810 | orchestrator | Version: 27.5.1 2025-06-03 15:04:20.617821 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-06-03 15:04:20.617866 | orchestrator | Go version: go1.22.11 2025-06-03 15:04:20.617878 | orchestrator | Git commit: 4c9b3b0 2025-06-03 15:04:20.617889 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-03 15:04:20.617900 | orchestrator | OS/Arch: linux/amd64 2025-06-03 15:04:20.617911 | orchestrator | Experimental: false 2025-06-03 15:04:20.617923 | orchestrator | containerd: 2025-06-03 15:04:20.617934 | orchestrator | Version: 1.7.27 2025-06-03 15:04:20.617945 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-06-03 15:04:20.617957 | orchestrator | runc: 2025-06-03 15:04:20.617968 | orchestrator | Version: 1.2.5 2025-06-03 15:04:20.617979 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-06-03 15:04:20.617990 | orchestrator | docker-init: 2025-06-03 15:04:20.618001 | orchestrator | Version: 0.19.0 2025-06-03 15:04:20.618013 | orchestrator | GitCommit: de40ad0 2025-06-03 15:04:20.621710 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-06-03 15:04:20.631858 | orchestrator | + set -e 2025-06-03 15:04:20.631951 | orchestrator | + source /opt/manager-vars.sh 2025-06-03 15:04:20.631965 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-03 15:04:20.631976 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-03 15:04:20.631986 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-03 15:04:20.631996 | orchestrator | ++ CEPH_VERSION=reef 2025-06-03 15:04:20.632007 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-03 
15:04:20.632018 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-03 15:04:20.632028 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-03 15:04:20.632038 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-03 15:04:20.632048 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-03 15:04:20.632057 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-03 15:04:20.632068 | orchestrator | ++ export ARA=false 2025-06-03 15:04:20.632078 | orchestrator | ++ ARA=false 2025-06-03 15:04:20.632088 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-03 15:04:20.632098 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-03 15:04:20.632108 | orchestrator | ++ export TEMPEST=false 2025-06-03 15:04:20.632118 | orchestrator | ++ TEMPEST=false 2025-06-03 15:04:20.632128 | orchestrator | ++ export IS_ZUUL=true 2025-06-03 15:04:20.632137 | orchestrator | ++ IS_ZUUL=true 2025-06-03 15:04:20.632147 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.16 2025-06-03 15:04:20.632157 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.16 2025-06-03 15:04:20.632167 | orchestrator | ++ export EXTERNAL_API=false 2025-06-03 15:04:20.632177 | orchestrator | ++ EXTERNAL_API=false 2025-06-03 15:04:20.632186 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-03 15:04:20.632196 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-03 15:04:20.632206 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-03 15:04:20.632216 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-03 15:04:20.632226 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-03 15:04:20.632236 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-03 15:04:20.632246 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-03 15:04:20.632256 | orchestrator | ++ export INTERACTIVE=false 2025-06-03 15:04:20.632265 | orchestrator | ++ INTERACTIVE=false 2025-06-03 15:04:20.632275 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-03 15:04:20.632288 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2025-06-03 15:04:20.632437 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-03 15:04:20.632453 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-03 15:04:20.632467 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-06-03 15:04:20.639780 | orchestrator | + set -e 2025-06-03 15:04:20.639861 | orchestrator | + VERSION=reef 2025-06-03 15:04:20.640967 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-06-03 15:04:20.646971 | orchestrator | + [[ -n ceph_version: reef ]] 2025-06-03 15:04:20.647022 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-06-03 15:04:20.653538 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-06-03 15:04:20.659695 | orchestrator | + set -e 2025-06-03 15:04:20.659718 | orchestrator | + VERSION=2024.2 2025-06-03 15:04:20.659910 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-06-03 15:04:20.663919 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-06-03 15:04:20.663941 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-06-03 15:04:20.669282 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-06-03 15:04:20.670411 | orchestrator | ++ semver latest 7.0.0 2025-06-03 15:04:20.733244 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-03 15:04:20.733322 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-03 15:04:20.733338 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-06-03 15:04:20.733351 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-06-03 15:04:20.774730 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-03 15:04:20.777170 | orchestrator | + source /opt/venv/bin/activate 2025-06-03 15:04:20.778181 | orchestrator | ++ 
deactivate nondestructive 2025-06-03 15:04:20.778211 | orchestrator | ++ '[' -n '' ']' 2025-06-03 15:04:20.778227 | orchestrator | ++ '[' -n '' ']' 2025-06-03 15:04:20.778239 | orchestrator | ++ hash -r 2025-06-03 15:04:20.778380 | orchestrator | ++ '[' -n '' ']' 2025-06-03 15:04:20.778430 | orchestrator | ++ unset VIRTUAL_ENV 2025-06-03 15:04:20.778452 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-06-03 15:04:20.778470 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-06-03 15:04:20.778589 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-06-03 15:04:20.778605 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-06-03 15:04:20.778709 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-06-03 15:04:20.778724 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-06-03 15:04:20.778741 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-03 15:04:20.778782 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-03 15:04:20.778907 | orchestrator | ++ export PATH 2025-06-03 15:04:20.778923 | orchestrator | ++ '[' -n '' ']' 2025-06-03 15:04:20.778939 | orchestrator | ++ '[' -z '' ']' 2025-06-03 15:04:20.778991 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-06-03 15:04:20.779006 | orchestrator | ++ PS1='(venv) ' 2025-06-03 15:04:20.779017 | orchestrator | ++ export PS1 2025-06-03 15:04:20.779029 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-06-03 15:04:20.779121 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-06-03 15:04:20.779141 | orchestrator | ++ hash -r 2025-06-03 15:04:20.779585 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-06-03 15:04:22.047748 | orchestrator | 2025-06-03 15:04:22.047864 | orchestrator | PLAY [Copy custom facts] 
******************************************************* 2025-06-03 15:04:22.047884 | orchestrator | 2025-06-03 15:04:22.047902 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-03 15:04:22.626983 | orchestrator | ok: [testbed-manager] 2025-06-03 15:04:22.627081 | orchestrator | 2025-06-03 15:04:22.627100 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-06-03 15:04:23.612950 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:23.613054 | orchestrator | 2025-06-03 15:04:23.613070 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-06-03 15:04:23.613082 | orchestrator | 2025-06-03 15:04:23.613094 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-03 15:04:26.035753 | orchestrator | ok: [testbed-manager] 2025-06-03 15:04:26.035825 | orchestrator | 2025-06-03 15:04:26.035840 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-06-03 15:04:26.091456 | orchestrator | ok: [testbed-manager] 2025-06-03 15:04:26.091528 | orchestrator | 2025-06-03 15:04:26.091548 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-06-03 15:04:26.525645 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:26.525728 | orchestrator | 2025-06-03 15:04:26.525747 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-06-03 15:04:26.570648 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:04:26.570716 | orchestrator | 2025-06-03 15:04:26.570730 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-03 15:04:26.901506 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:26.901591 | orchestrator | 2025-06-03 15:04:26.901621 | orchestrator | TASK [Use insecure 
glance configuration] *************************************** 2025-06-03 15:04:26.948574 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:04:26.948676 | orchestrator | 2025-06-03 15:04:26.948692 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-06-03 15:04:27.274562 | orchestrator | ok: [testbed-manager] 2025-06-03 15:04:27.274636 | orchestrator | 2025-06-03 15:04:27.274653 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-06-03 15:04:27.389021 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:04:27.389100 | orchestrator | 2025-06-03 15:04:27.389117 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-06-03 15:04:27.389132 | orchestrator | 2025-06-03 15:04:27.389148 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-03 15:04:29.198473 | orchestrator | ok: [testbed-manager] 2025-06-03 15:04:29.198561 | orchestrator | 2025-06-03 15:04:29.198579 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-06-03 15:04:29.301850 | orchestrator | included: osism.services.traefik for testbed-manager 2025-06-03 15:04:29.301921 | orchestrator | 2025-06-03 15:04:29.301936 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-06-03 15:04:29.369175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-06-03 15:04:29.369239 | orchestrator | 2025-06-03 15:04:29.369253 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-06-03 15:04:30.426504 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-06-03 15:04:30.426593 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2025-06-03 15:04:30.426608 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-06-03 15:04:30.426620 | orchestrator | 2025-06-03 15:04:30.426633 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-06-03 15:04:32.219846 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-06-03 15:04:32.219948 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-06-03 15:04:32.219970 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-06-03 15:04:32.219985 | orchestrator | 2025-06-03 15:04:32.220000 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-06-03 15:04:32.855718 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-03 15:04:32.855809 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:32.855825 | orchestrator | 2025-06-03 15:04:32.855838 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-06-03 15:04:33.489260 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-03 15:04:33.490253 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:33.490297 | orchestrator | 2025-06-03 15:04:33.490311 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-06-03 15:04:33.536261 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:04:33.536341 | orchestrator | 2025-06-03 15:04:33.536356 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-06-03 15:04:33.889685 | orchestrator | ok: [testbed-manager] 2025-06-03 15:04:33.889767 | orchestrator | 2025-06-03 15:04:33.889783 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-06-03 15:04:33.957905 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-06-03 15:04:33.957989 | orchestrator | 2025-06-03 15:04:33.958004 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-06-03 15:04:34.962721 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:34.962810 | orchestrator | 2025-06-03 15:04:34.962827 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-06-03 15:04:35.769940 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:35.770080 | orchestrator | 2025-06-03 15:04:35.770101 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-06-03 15:04:46.236785 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:46.236895 | orchestrator | 2025-06-03 15:04:46.236913 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-06-03 15:04:46.290341 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:04:46.290464 | orchestrator | 2025-06-03 15:04:46.290483 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-06-03 15:04:46.290496 | orchestrator | 2025-06-03 15:04:46.290509 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-03 15:04:48.147750 | orchestrator | ok: [testbed-manager] 2025-06-03 15:04:48.147868 | orchestrator | 2025-06-03 15:04:48.147936 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-06-03 15:04:48.254508 | orchestrator | included: osism.services.manager for testbed-manager 2025-06-03 15:04:48.254602 | orchestrator | 2025-06-03 15:04:48.254617 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-06-03 15:04:48.312759 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-06-03 15:04:48.312854 | orchestrator | 2025-06-03 15:04:48.312872 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-06-03 15:04:50.947696 | orchestrator | ok: [testbed-manager] 2025-06-03 15:04:50.947783 | orchestrator | 2025-06-03 15:04:50.947795 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-06-03 15:04:50.999667 | orchestrator | ok: [testbed-manager] 2025-06-03 15:04:50.999762 | orchestrator | 2025-06-03 15:04:50.999780 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-06-03 15:04:51.124188 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-06-03 15:04:51.124284 | orchestrator | 2025-06-03 15:04:51.124299 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-06-03 15:04:53.966919 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-06-03 15:04:53.967020 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-06-03 15:04:53.967062 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-06-03 15:04:53.967076 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-06-03 15:04:53.967088 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-06-03 15:04:53.967100 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-06-03 15:04:53.967111 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-06-03 15:04:53.967123 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-06-03 15:04:53.967135 | orchestrator | 2025-06-03 15:04:53.967147 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2025-06-03 15:04:54.593369 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:54.593494 | orchestrator | 2025-06-03 15:04:54.593511 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-06-03 15:04:55.251307 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:55.251439 | orchestrator | 2025-06-03 15:04:55.251458 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-06-03 15:04:55.330403 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-06-03 15:04:55.330489 | orchestrator | 2025-06-03 15:04:55.330503 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-06-03 15:04:56.565870 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-06-03 15:04:56.565969 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-06-03 15:04:56.565984 | orchestrator | 2025-06-03 15:04:56.565998 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-06-03 15:04:57.178577 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:57.178662 | orchestrator | 2025-06-03 15:04:57.178673 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-06-03 15:04:57.217127 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:04:57.217205 | orchestrator | 2025-06-03 15:04:57.217219 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-06-03 15:04:57.266930 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-06-03 15:04:57.267003 | orchestrator | 2025-06-03 15:04:57.267018 | orchestrator | TASK 
[osism.services.manager : Copy private ssh keys] ************************** 2025-06-03 15:04:58.647662 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-03 15:04:58.647768 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-03 15:04:58.647783 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:58.647797 | orchestrator | 2025-06-03 15:04:58.647810 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-06-03 15:04:59.301959 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:59.302112 | orchestrator | 2025-06-03 15:04:59.302130 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-06-03 15:04:59.355622 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:04:59.355702 | orchestrator | 2025-06-03 15:04:59.355716 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-06-03 15:04:59.447308 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-06-03 15:04:59.447436 | orchestrator | 2025-06-03 15:04:59.447463 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-06-03 15:04:59.982092 | orchestrator | changed: [testbed-manager] 2025-06-03 15:04:59.982190 | orchestrator | 2025-06-03 15:04:59.982208 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-06-03 15:05:00.395209 | orchestrator | changed: [testbed-manager] 2025-06-03 15:05:00.395293 | orchestrator | 2025-06-03 15:05:00.395307 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-06-03 15:05:01.660969 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-06-03 15:05:01.661088 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-06-03 
15:05:01.661117 | orchestrator | 2025-06-03 15:05:01.661133 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-06-03 15:05:02.338944 | orchestrator | changed: [testbed-manager] 2025-06-03 15:05:02.339022 | orchestrator | 2025-06-03 15:05:02.339038 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-06-03 15:05:02.731413 | orchestrator | ok: [testbed-manager] 2025-06-03 15:05:02.731488 | orchestrator | 2025-06-03 15:05:02.731503 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-06-03 15:05:03.116830 | orchestrator | changed: [testbed-manager] 2025-06-03 15:05:03.116907 | orchestrator | 2025-06-03 15:05:03.116921 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-06-03 15:05:03.167016 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:05:03.167092 | orchestrator | 2025-06-03 15:05:03.167115 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-06-03 15:05:03.236411 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-06-03 15:05:03.236471 | orchestrator | 2025-06-03 15:05:03.236484 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-06-03 15:05:03.283840 | orchestrator | ok: [testbed-manager] 2025-06-03 15:05:03.283906 | orchestrator | 2025-06-03 15:05:03.283919 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-06-03 15:05:05.309432 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-06-03 15:05:05.309537 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-06-03 15:05:05.309552 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-06-03 
15:05:05.309565 | orchestrator | 2025-06-03 15:05:05.309578 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-06-03 15:05:06.035201 | orchestrator | changed: [testbed-manager] 2025-06-03 15:05:06.035291 | orchestrator | 2025-06-03 15:05:06.035307 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-06-03 15:05:06.743675 | orchestrator | changed: [testbed-manager] 2025-06-03 15:05:06.743772 | orchestrator | 2025-06-03 15:05:06.743788 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-06-03 15:05:07.465838 | orchestrator | changed: [testbed-manager] 2025-06-03 15:05:07.465932 | orchestrator | 2025-06-03 15:05:07.465948 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-06-03 15:05:07.543492 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-06-03 15:05:07.543593 | orchestrator | 2025-06-03 15:05:07.543608 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-06-03 15:05:07.598791 | orchestrator | ok: [testbed-manager] 2025-06-03 15:05:07.598874 | orchestrator | 2025-06-03 15:05:07.598890 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-06-03 15:05:08.287802 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-06-03 15:05:08.287897 | orchestrator | 2025-06-03 15:05:08.287913 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-06-03 15:05:08.379021 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-06-03 15:05:08.379118 | orchestrator | 2025-06-03 15:05:08.379133 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2025-06-03 15:05:09.076689 | orchestrator | changed: [testbed-manager] 2025-06-03 15:05:09.076788 | orchestrator | 2025-06-03 15:05:09.076803 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-06-03 15:05:09.692192 | orchestrator | ok: [testbed-manager] 2025-06-03 15:05:09.692305 | orchestrator | 2025-06-03 15:05:09.692321 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-06-03 15:05:09.750574 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:05:09.750657 | orchestrator | 2025-06-03 15:05:09.750671 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-06-03 15:05:09.810175 | orchestrator | ok: [testbed-manager] 2025-06-03 15:05:09.810243 | orchestrator | 2025-06-03 15:05:09.810257 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-06-03 15:05:10.638648 | orchestrator | changed: [testbed-manager] 2025-06-03 15:05:10.638765 | orchestrator | 2025-06-03 15:05:10.638792 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-06-03 15:06:13.563574 | orchestrator | changed: [testbed-manager] 2025-06-03 15:06:13.563688 | orchestrator | 2025-06-03 15:06:13.563704 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-06-03 15:06:14.446272 | orchestrator | ok: [testbed-manager] 2025-06-03 15:06:14.446376 | orchestrator | 2025-06-03 15:06:14.446440 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-06-03 15:06:14.496707 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:06:14.496820 | orchestrator | 2025-06-03 15:06:14.496837 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 
2025-06-03 15:06:17.003098 | orchestrator | changed: [testbed-manager] 2025-06-03 15:06:17.003206 | orchestrator | 2025-06-03 15:06:17.003225 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-06-03 15:06:17.048561 | orchestrator | ok: [testbed-manager] 2025-06-03 15:06:17.048634 | orchestrator | 2025-06-03 15:06:17.048650 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-03 15:06:17.048663 | orchestrator | 2025-06-03 15:06:17.048674 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-06-03 15:06:17.098842 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:06:17.098928 | orchestrator | 2025-06-03 15:06:17.098942 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-06-03 15:07:17.145239 | orchestrator | Pausing for 60 seconds 2025-06-03 15:07:17.145356 | orchestrator | changed: [testbed-manager] 2025-06-03 15:07:17.145476 | orchestrator | 2025-06-03 15:07:17.145496 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-06-03 15:07:20.723483 | orchestrator | changed: [testbed-manager] 2025-06-03 15:07:20.723610 | orchestrator | 2025-06-03 15:07:20.723629 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-06-03 15:08:02.339892 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-06-03 15:08:02.340013 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-06-03 15:08:02.340029 | orchestrator | changed: [testbed-manager] 2025-06-03 15:08:02.340043 | orchestrator | 2025-06-03 15:08:02.340056 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-06-03 15:08:10.996294 | orchestrator | changed: [testbed-manager] 2025-06-03 15:08:10.996378 | orchestrator | 2025-06-03 15:08:10.996385 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-06-03 15:08:11.073994 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-06-03 15:08:11.074220 | orchestrator | 2025-06-03 15:08:11.074248 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-03 15:08:11.074272 | orchestrator | 2025-06-03 15:08:11.074293 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-06-03 15:08:11.129262 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:08:11.129386 | orchestrator | 2025-06-03 15:08:11.129438 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:08:11.129452 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-03 15:08:11.129464 | orchestrator | 2025-06-03 15:08:11.223529 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-03 15:08:11.223620 | orchestrator | + deactivate 2025-06-03 15:08:11.223635 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-06-03 15:08:11.223649 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-03 15:08:11.223660 | orchestrator | + export PATH 2025-06-03 15:08:11.223672 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-06-03 
15:08:11.223684 | orchestrator | + '[' -n '' ']' 2025-06-03 15:08:11.223695 | orchestrator | + hash -r 2025-06-03 15:08:11.223706 | orchestrator | + '[' -n '' ']' 2025-06-03 15:08:11.223717 | orchestrator | + unset VIRTUAL_ENV 2025-06-03 15:08:11.223728 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-06-03 15:08:11.223763 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-06-03 15:08:11.223775 | orchestrator | + unset -f deactivate 2025-06-03 15:08:11.223786 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-06-03 15:08:11.230257 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-03 15:08:11.230283 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-03 15:08:11.230294 | orchestrator | + local max_attempts=60 2025-06-03 15:08:11.230306 | orchestrator | + local name=ceph-ansible 2025-06-03 15:08:11.230317 | orchestrator | + local attempt_num=1 2025-06-03 15:08:11.231338 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-03 15:08:11.265154 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-03 15:08:11.265203 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-03 15:08:11.265216 | orchestrator | + local max_attempts=60 2025-06-03 15:08:11.265227 | orchestrator | + local name=kolla-ansible 2025-06-03 15:08:11.265239 | orchestrator | + local attempt_num=1 2025-06-03 15:08:11.265666 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-03 15:08:11.293526 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-03 15:08:11.293564 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-03 15:08:11.293575 | orchestrator | + local max_attempts=60 2025-06-03 15:08:11.293587 | orchestrator | + local name=osism-ansible 2025-06-03 15:08:11.293598 | orchestrator | + local attempt_num=1 2025-06-03 15:08:11.294527 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' osism-ansible 2025-06-03 15:08:11.329606 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-03 15:08:11.329649 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-03 15:08:11.329661 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-03 15:08:11.982998 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-06-03 15:08:12.149643 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-06-03 15:08:12.149742 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-06-03 15:08:12.149757 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-06-03 15:08:12.149769 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-06-03 15:08:12.149783 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-06-03 15:08:12.149829 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-06-03 15:08:12.149842 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-06-03 15:08:12.149853 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy) 2025-06-03 15:08:12.149864 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-06-03 
15:08:12.149875 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-06-03 15:08:12.149886 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-06-03 15:08:12.149897 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-06-03 15:08:12.149908 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-06-03 15:08:12.149918 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-06-03 15:08:12.149929 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-06-03 15:08:12.159273 | orchestrator | ++ semver latest 7.0.0 2025-06-03 15:08:12.216342 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-03 15:08:12.216450 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-03 15:08:12.216467 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-06-03 15:08:12.219871 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-06-03 15:08:13.923350 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:08:13.923500 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:08:13.923515 | orchestrator | Registering Redlock._release_script 2025-06-03 15:08:14.127840 | orchestrator | 2025-06-03 15:08:14 | INFO  | Task 7a578bb9-389b-47ca-98e8-5f14204ce275 (resolvconf) was prepared for execution. 
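The `set -x` trace earlier in this log (the `wait_for_container_healthy 60 ceph-ansible` calls) shows a polling loop built around `docker inspect -f '{{.State.Health.Status}}'`. A minimal sketch of what such a helper might look like follows; the function body, poll interval, and the `check_health` wrapper name are assumptions reconstructed from the trace, not the actual script from `/opt/configuration`:

```shell
#!/bin/sh
# Sketch of a container health-wait loop, as suggested by the trace above.
# In the real script the probe would be:
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
# Here check_health is a stand-in so the sketch is runnable without Docker.

wait_for_container_healthy() {
    max_attempts=$1
    name=$2
    attempt_num=1
    while :; do
        status=$(check_health "$name")
        # Succeed as soon as the container reports healthy.
        [ "$status" = "healthy" ] && return 0
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "Container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # assumed poll interval; the trace does not show the delay
    done
}

# Stand-in probe for demonstration; replace with the docker inspect call above.
check_health() {
    echo "healthy"
}

wait_for_container_healthy 60 ceph-ansible && echo "ceph-ansible is healthy"
```

In the log, all three containers (ceph-ansible, kolla-ansible, osism-ansible) report healthy on the first probe, so the loop exits immediately each time.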
2025-06-03 15:08:14.127926 | orchestrator | 2025-06-03 15:08:14 | INFO  | It takes a moment until task 7a578bb9-389b-47ca-98e8-5f14204ce275 (resolvconf) has been started and output is visible here. 2025-06-03 15:08:17.947712 | orchestrator | 2025-06-03 15:08:17.947835 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-06-03 15:08:17.948898 | orchestrator | 2025-06-03 15:08:17.949884 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-03 15:08:17.951370 | orchestrator | Tuesday 03 June 2025 15:08:17 +0000 (0:00:00.146) 0:00:00.146 ********** 2025-06-03 15:08:21.356007 | orchestrator | ok: [testbed-manager] 2025-06-03 15:08:21.356644 | orchestrator | 2025-06-03 15:08:21.357448 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-03 15:08:21.358443 | orchestrator | Tuesday 03 June 2025 15:08:21 +0000 (0:00:03.410) 0:00:03.557 ********** 2025-06-03 15:08:21.417527 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:08:21.418205 | orchestrator | 2025-06-03 15:08:21.419233 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-03 15:08:21.419584 | orchestrator | Tuesday 03 June 2025 15:08:21 +0000 (0:00:00.061) 0:00:03.619 ********** 2025-06-03 15:08:21.497635 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-06-03 15:08:21.498065 | orchestrator | 2025-06-03 15:08:21.498633 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-03 15:08:21.499462 | orchestrator | Tuesday 03 June 2025 15:08:21 +0000 (0:00:00.080) 0:00:03.700 ********** 2025-06-03 15:08:21.582441 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-06-03 15:08:21.582932 | orchestrator | 2025-06-03 15:08:21.583803 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-03 15:08:21.584647 | orchestrator | Tuesday 03 June 2025 15:08:21 +0000 (0:00:00.084) 0:00:03.784 ********** 2025-06-03 15:08:22.737009 | orchestrator | ok: [testbed-manager] 2025-06-03 15:08:22.737337 | orchestrator | 2025-06-03 15:08:22.737999 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-03 15:08:22.738707 | orchestrator | Tuesday 03 June 2025 15:08:22 +0000 (0:00:01.152) 0:00:04.937 ********** 2025-06-03 15:08:22.797869 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:08:22.800624 | orchestrator | 2025-06-03 15:08:22.801577 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-03 15:08:22.801875 | orchestrator | Tuesday 03 June 2025 15:08:22 +0000 (0:00:00.062) 0:00:05.000 ********** 2025-06-03 15:08:23.302246 | orchestrator | ok: [testbed-manager] 2025-06-03 15:08:23.302358 | orchestrator | 2025-06-03 15:08:23.303331 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-03 15:08:23.304140 | orchestrator | Tuesday 03 June 2025 15:08:23 +0000 (0:00:00.502) 0:00:05.502 ********** 2025-06-03 15:08:23.392131 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:08:23.392327 | orchestrator | 2025-06-03 15:08:23.393408 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-03 15:08:23.394159 | orchestrator | Tuesday 03 June 2025 15:08:23 +0000 (0:00:00.089) 0:00:05.592 ********** 2025-06-03 15:08:23.908846 | orchestrator | changed: [testbed-manager] 2025-06-03 15:08:23.909839 | orchestrator | 2025-06-03 
15:08:23.910182 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-03 15:08:23.911115 | orchestrator | Tuesday 03 June 2025 15:08:23 +0000 (0:00:00.517) 0:00:06.110 ********** 2025-06-03 15:08:24.916628 | orchestrator | changed: [testbed-manager] 2025-06-03 15:08:24.916966 | orchestrator | 2025-06-03 15:08:24.918246 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-03 15:08:24.919543 | orchestrator | Tuesday 03 June 2025 15:08:24 +0000 (0:00:01.007) 0:00:07.117 ********** 2025-06-03 15:08:25.880365 | orchestrator | ok: [testbed-manager] 2025-06-03 15:08:25.880875 | orchestrator | 2025-06-03 15:08:25.882216 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-03 15:08:25.883258 | orchestrator | Tuesday 03 June 2025 15:08:25 +0000 (0:00:00.962) 0:00:08.080 ********** 2025-06-03 15:08:25.956845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-06-03 15:08:25.958003 | orchestrator | 2025-06-03 15:08:25.958768 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-03 15:08:25.959658 | orchestrator | Tuesday 03 June 2025 15:08:25 +0000 (0:00:00.078) 0:00:08.158 ********** 2025-06-03 15:08:26.990339 | orchestrator | changed: [testbed-manager] 2025-06-03 15:08:26.990455 | orchestrator | 2025-06-03 15:08:26.990702 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:08:26.991274 | orchestrator | 2025-06-03 15:08:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:08:26.991378 | orchestrator | 2025-06-03 15:08:26 | INFO  | Please wait and do not abort execution. 
2025-06-03 15:08:26.991495 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-03 15:08:26.991740 | orchestrator | 2025-06-03 15:08:26.992568 | orchestrator | 2025-06-03 15:08:26.993559 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:08:26.994276 | orchestrator | Tuesday 03 June 2025 15:08:26 +0000 (0:00:01.032) 0:00:09.190 ********** 2025-06-03 15:08:26.995258 | orchestrator | =============================================================================== 2025-06-03 15:08:26.995958 | orchestrator | Gathering Facts --------------------------------------------------------- 3.41s 2025-06-03 15:08:26.996843 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.15s 2025-06-03 15:08:26.997357 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.03s 2025-06-03 15:08:26.997988 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.01s 2025-06-03 15:08:26.998769 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.96s 2025-06-03 15:08:26.999374 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s 2025-06-03 15:08:27.000142 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s 2025-06-03 15:08:27.000807 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2025-06-03 15:08:27.001250 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-06-03 15:08:27.001681 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-06-03 15:08:27.002319 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-06-03 
15:08:27.002684 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-06-03 15:08:27.003313 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-06-03 15:08:27.285989 | orchestrator | + osism apply sshconfig 2025-06-03 15:08:28.711109 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:08:28.711173 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:08:28.711185 | orchestrator | Registering Redlock._release_script 2025-06-03 15:08:28.762219 | orchestrator | 2025-06-03 15:08:28 | INFO  | Task 0e25267e-c152-4fda-8602-6cbaa45ac71e (sshconfig) was prepared for execution. 2025-06-03 15:08:28.762286 | orchestrator | 2025-06-03 15:08:28 | INFO  | It takes a moment until task 0e25267e-c152-4fda-8602-6cbaa45ac71e (sshconfig) has been started and output is visible here. 2025-06-03 15:08:32.266939 | orchestrator | 2025-06-03 15:08:32.267926 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-06-03 15:08:32.267966 | orchestrator | 2025-06-03 15:08:32.268263 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-06-03 15:08:32.268833 | orchestrator | Tuesday 03 June 2025 15:08:32 +0000 (0:00:00.143) 0:00:00.143 ********** 2025-06-03 15:08:32.679831 | orchestrator | ok: [testbed-manager] 2025-06-03 15:08:32.680346 | orchestrator | 2025-06-03 15:08:32.681138 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-06-03 15:08:32.681740 | orchestrator | Tuesday 03 June 2025 15:08:32 +0000 (0:00:00.415) 0:00:00.559 ********** 2025-06-03 15:08:33.098186 | orchestrator | changed: [testbed-manager] 2025-06-03 15:08:33.098289 | orchestrator | 2025-06-03 15:08:33.098566 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-06-03 15:08:33.099254 | orchestrator | 
Tuesday 03 June 2025 15:08:33 +0000 (0:00:00.416) 0:00:00.976 ********** 2025-06-03 15:08:38.351920 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-06-03 15:08:38.352031 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-06-03 15:08:38.352764 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-06-03 15:08:38.352822 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-06-03 15:08:38.353151 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-06-03 15:08:38.353561 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-06-03 15:08:38.353921 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-06-03 15:08:38.353941 | orchestrator | 2025-06-03 15:08:38.354507 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-06-03 15:08:38.354757 | orchestrator | Tuesday 03 June 2025 15:08:38 +0000 (0:00:05.253) 0:00:06.229 ********** 2025-06-03 15:08:38.417306 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:08:38.417744 | orchestrator | 2025-06-03 15:08:38.417926 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-06-03 15:08:38.418536 | orchestrator | Tuesday 03 June 2025 15:08:38 +0000 (0:00:00.066) 0:00:06.296 ********** 2025-06-03 15:08:38.991521 | orchestrator | changed: [testbed-manager] 2025-06-03 15:08:38.992109 | orchestrator | 2025-06-03 15:08:38.994339 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:08:38.994435 | orchestrator | 2025-06-03 15:08:38 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:08:38.994459 | orchestrator | 2025-06-03 15:08:38 | INFO  | Please wait and do not abort execution. 
2025-06-03 15:08:38.995463 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:08:38.995755 | orchestrator | 2025-06-03 15:08:38.997033 | orchestrator | 2025-06-03 15:08:38.997764 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:08:38.998194 | orchestrator | Tuesday 03 June 2025 15:08:38 +0000 (0:00:00.574) 0:00:06.870 ********** 2025-06-03 15:08:38.999380 | orchestrator | =============================================================================== 2025-06-03 15:08:39.000547 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.25s 2025-06-03 15:08:39.000887 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s 2025-06-03 15:08:39.001382 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.42s 2025-06-03 15:08:39.002125 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.42s 2025-06-03 15:08:39.002816 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-06-03 15:08:39.471769 | orchestrator | + osism apply known-hosts 2025-06-03 15:08:41.085945 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:08:41.086108 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:08:41.086126 | orchestrator | Registering Redlock._release_script 2025-06-03 15:08:41.143197 | orchestrator | 2025-06-03 15:08:41 | INFO  | Task 3a1569a7-6d98-4b82-abd1-62574a077151 (known-hosts) was prepared for execution. 2025-06-03 15:08:41.143293 | orchestrator | 2025-06-03 15:08:41 | INFO  | It takes a moment until task 3a1569a7-6d98-4b82-abd1-62574a077151 (known-hosts) has been started and output is visible here. 
2025-06-03 15:08:44.969129 | orchestrator | 2025-06-03 15:08:44.970428 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-06-03 15:08:44.970551 | orchestrator | 2025-06-03 15:08:44.971937 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-06-03 15:08:44.972013 | orchestrator | Tuesday 03 June 2025 15:08:44 +0000 (0:00:00.165) 0:00:00.165 ********** 2025-06-03 15:08:50.925480 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-03 15:08:50.925832 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-03 15:08:50.925975 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-03 15:08:50.927466 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-03 15:08:50.928999 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-03 15:08:50.929455 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-03 15:08:50.929796 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-03 15:08:50.930499 | orchestrator | 2025-06-03 15:08:50.930883 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-06-03 15:08:50.931257 | orchestrator | Tuesday 03 June 2025 15:08:50 +0000 (0:00:05.958) 0:00:06.124 ********** 2025-06-03 15:08:51.112486 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-03 15:08:51.112675 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-03 15:08:51.112690 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-03 15:08:51.112697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-03 15:08:51.113194 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-03 15:08:51.114471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-03 15:08:51.114493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-03 15:08:51.114500 | orchestrator | 2025-06-03 15:08:51.114509 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:51.114851 | orchestrator | Tuesday 03 June 2025 15:08:51 +0000 (0:00:00.187) 0:00:06.311 ********** 2025-06-03 15:08:52.289967 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDMIrx6pBxLcAz0IvLV3J9fXLtAuxY/jlzXKVRfjm+F1+0BQlyBRCpayIaD91A/w/DZ12dNeYJpIdDEzBkomMQ/rqFgoCrgA13dZRbIQZQooR/BU0pB/W5YvTNWKomfCT1/VgBG2tnk4+fUxWSF+SiWo4d5hYYX2ZVdwKtxkUdelxAuXL69LdlnuHVBpJbuSUFM3z3XYUeuVw176x6CfZTesTS+5Ut80BJcO+5retmBYdq7znCMAdA5xI2gwGUToge0oI1Uu+h3i+qI0iYWyOtHFi8C2c63c6wf8AdjPJ/VJ8ckL782i6lcnYk18zt1vV1VSRXze5a9sSi2czSX4bligW1sXtNEBIJG4dfX7S8fCDznvuncO+EGen+mfCX2VblI2XR81fztGUWsjTvixrhTdu3i6KFAUtG6W3Oen9t/BeGJY6kS8JpXkSpbJd+dahMx6X3hfFVixlLbW62wx0J1tMLp+q4mGlwLmm/T387JCcrd3kFJFBE67xrwR9rFZ+U=) 2025-06-03 15:08:52.290250 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCdFXTPNbqpv4FWbdXqIF8C9bgxcABypizk+q3O3QQjGsNyMbCr76QT29kXPk9A8RcM2usb1Ns4Lr+cVEyW8J5I=) 2025-06-03 15:08:52.290780 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKcXh0jwTbjKKksEWv3888V2edom36GFJg5l1qlYULNC) 2025-06-03 15:08:52.291520 | orchestrator | 2025-06-03 15:08:52.292350 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:52.293028 | orchestrator | Tuesday 03 June 2025 15:08:52 +0000 (0:00:01.178) 0:00:07.489 ********** 2025-06-03 15:08:53.337985 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfuJTvZ2zGiE9/3I9KcneBGo9LtzxXWiuxdXGI8KtRH2glwXfLnlqXnHe1gGX/UeWwqRtnt0jEPgmyIvcVxnTMFCOAD7X68ix02qrphxh6fu5VFAf5JSqh+01M2BxA3prPg8NY2YBeovoQEZWjpUoVSx37SjCQ3Z5lhzLDUo/x6hO6zmnGsl8dWNtbE2qzv6DpBm3iSoWmnJYfGM7aPzMA+otGzuMU/AhkZuDOvNyIC1Is1+XAnXMjlZyAwtagUyGLyfTOopPN22kDsI9HP24MQoWxtGncrxrSYQVSwFsocqD+OfZL0GZ88clVcZOuq9Qlg9/gNNxWvpvU58cMnVGifOZ/IPpltmPQ3cGyzelzpNkNP1a36xera1oPffU7hL06d4k1mlM4zNC8IC8Fsn0KWZEL6kzOqdVMPcyEGIq07NanXkxbxvGVKvcAVHmHM47OKy384MOvtZbw+yvlwy1iSxtQRxvg99osKhKQ5HHQjuegYoYT8DodpP6rQgh3p4s=) 2025-06-03 15:08:53.338328 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK5rhZ1h2xNPE4zqWAvD9rf0j9GTlmFvjeaykBhSKEKzmIzlP9AFbuo0s4P+fJG3WNP5T3Wz+k1PATI7eUjDHtg=) 2025-06-03 15:08:53.338518 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFgPZWsz2KBkr9fCcDFnNpIbC07pSGPa3fLfC1jpemyd) 2025-06-03 15:08:53.340122 | orchestrator | 2025-06-03 15:08:53.340530 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:53.340869 | orchestrator | Tuesday 03 June 2025 15:08:53 +0000 (0:00:01.047) 0:00:08.536 ********** 2025-06-03 15:08:54.371127 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLHQkgGsyxSV4e+o/9ZANdXVD0NwCDdBV0Lm5T9GBJ6nZpSu8M0+Oa2mpCjTAvW81zNXvm5BN7SH0UQH+uZ9OKs=) 2025-06-03 15:08:54.372028 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDpLoAljl6rQLZ1xs60AKn7W1EcPrPgKlLpzn3O/o+jtmZplE3d2SEAGf9teqKXNv1uGALsmLABm0xo2WFqXjiU4Aik/Jy2AwPYFnfO51z7u2G50dufKbHGCxtYmndoTpXVFayCGxxi0G92VnD+OhBmV/9ZF1ZmyGUtSSV3FD0yMymPXA0dzsWV84+VxmNTl8svxhqaQASwhsDd06cPx9Yg7j2/Ks6O/h31195yM5sMdpXGD1ZVqfQIyrIo0TWzJlMFrJkaMrtG3x78OV5/KixeN6gVP47e3jlbuStJfQFPWTj968G2DRfJKRhm6EyGsIudHRWKp3GQGvaMnNw1UoIp9vfULWWrm4M+MTBmNzxiGJTbb2J3S5ZNtsW/lPBFvC1qh/3rlDLIryfLHpIJcnxQIxlUb0hbUF/4zZJxiKz581tSvaBPE6KOZ/PKgRDyCT/HwsG42pAPJ3ztDaIJIbQq6EgTECg4dsFCQ1460rsWE9XS6505N5FElnwnYKXyT/c=) 2025-06-03 15:08:54.372883 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAp6Jdgoji8v6QBcfyhexhU10STLohyNGKoCkCoD3B5E) 2025-06-03 15:08:54.373460 | orchestrator | 2025-06-03 15:08:54.373879 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:54.375070 | orchestrator | Tuesday 03 June 2025 15:08:54 +0000 (0:00:01.032) 
0:00:09.569 ********** 2025-06-03 15:08:55.372639 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEq4sv1jNlaaW9KHOjcLFlVQ9pHHhSqsV/VDZMXerHVp) 2025-06-03 15:08:55.372745 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDt2g7R5U0vSqN8sQ2y4eDxEy2gkt5KvQ99filZq15zwNrg1/FaXz1cBhcqw/lTbBi4jk1jnnxEz/ZeXLzJvpTw8Krwx7RaqMqkiexS0f0/4ZoXHQHhlg56k6lvDPNS87ScWNA6BGMSB6/tuM2h+fHfa50oXTqFT8rC0CtcZA6Cn2Mmkx/JG8YGsvynYszDNoRC2Ffzv7ohMAx2mAzaIt9bzl3cJ6NegIXfwSbzuZyXD7ndQG1EFv1RRpNxzNk/lwGgxUJOiqfSNE33Nf07Fa4gG6Id1OJ5t2wudM9aE9SpgUkJSf2wSblL9Y+trNtrxVVTG30IlhSnElxHUoqkbPOMaOXtEsxIJGOpm/vdGQUT9uBWXDPIZEIGkSiLKI0jO6bH16mEDU9Gb0QdJk3Gp4S+9ALOfBGE/bfUs6KATsIPEyCVH2OWxiGC7UQIr2X8ACaKoVwNQqy9C/fTE6ZBgbJWLg8P/BSoD3V0LdoCiv0P5vafoQ18oMV9iaNLFxp1mU=) 2025-06-03 15:08:55.372763 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEJ5Yx3FwLzltWkf1nDE664ZKzM4Zk1sZilQSazhJWIL+iEsI3fTDI1P1CnExueXRQKEZYzKEUW9N6JjrcXo7MI=) 2025-06-03 15:08:55.373410 | orchestrator | 2025-06-03 15:08:55.374882 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-03 15:08:55.375835 | orchestrator | Tuesday 03 June 2025 15:08:55 +0000 (0:00:01.001) 0:00:10.570 ********** 2025-06-03 15:08:56.430624 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLKSlr7Xl1jnuEWYZC7hSlFiVs68E47wNoFCraap5XFj0yX6G2blCRwwV1Up6VZwhBZIrPemYojoKdLm8H4+/0o=) 2025-06-03 15:08:56.430807 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDrEwy8p5D6GqskiprnYKfF6cGDqdeRexdkKQZ+m7gooLK0SRYEe8tn6loIZh6wzdgQP3ilXg1byEZOENriG0thSkpNPnLTKafL0EIEXiHWlHjYSuac8uIF/4C5OLqhxLsCQrBHiBO7+g39RnQTuu/QqVni4v2ZuF6qm6telP/hrrmctnc3l/ZHd0AUeysKsT9iRAPihYY5giEie3XMf0sMPzSqgcdqCXGaWwoI4xFql/x9TGYQld9QclO+yzuXLvaDP7Q2OrY/+HyoyKFWmWP4K51iZHN4vxk25sfExYem1B+LKAFFBIDmXiSfYMYLRPZOqiWlkgujPiZH09Me2fvzaMpTrgS4qt2fWDf1k0aiPhbhd05XdctPK/AbrHl9oAxdN0pQg+6olNZE1YGlcGNJYd8HgCa8NR7h6WkthW0qbu0pUR452HWieAPlBRYflYLGwfpdHWOzBM5sOeK7pUDt9oLNf6vb266VCF1x/9RwdKZUh15UfS0iKYUbNSAlztk=)
2025-06-03 15:08:56.431154 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPg5UUrFuF5c2lIsExhhY5yUoMSSXiJabSBQaL8zWqjj)
2025-06-03 15:08:56.431733 | orchestrator |
2025-06-03 15:08:56.433143 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-03 15:08:56.433483 | orchestrator | Tuesday 03 June 2025 15:08:56 +0000 (0:00:01.059) 0:00:11.629 **********
2025-06-03 15:08:57.451320 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKgF5wM26Rw7kXYGWwEzcRmgMCC9G/X8SIoYIyJwstpqC3VuUCqh1DXfzBy8SMUHSbRZkN0mRBiUejG7lkut5wQNWPn62oXLLgUH8zsC89e9uEE3SnkjC6YDXK8JgNM6z+JbarN/gAsowCTE9ivvga+H6TISQRyFkwNVvDC/dIhhFgwuh13vefeeLVAqhswniKzTenugX3Ilr6cizWuurljb22HqwmVEWwooJPq8wcKxCNlx8GX0ddphS0ieK09wXQ7Z93zm5kzk0ds14YqBaZAU6ybKQctCEWdhMxKxcKJ8sNuqrGDWjA+rLTh87LLtIsSUEN4CW4ouUmBVBWuErOikQKhKKKFkOJTOHeZGRx7MN6A95B8NVFoju4sO2hyRZSGzW41ftLXRqAj0hIqWgD4PIaF17DDPyYQkcJ9YBI7o2OjMVeO64VRFhSRAtUcdkRjIwY8Gz2ZLPt4pMx65g+CjOsEb3mpOvT6zB+04ummYIBWD3dBL/ptWkYEo2CGt0=)
2025-06-03 15:08:57.451856 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH5br0BzkZmgIAIHFoE5FBuslCoEy9w+zVxNIIAE1mKKTvX8IwmOX824qmjipQMqiscJNL/iqoEohgT8ruMNjB8=)
2025-06-03 15:08:57.452164 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGYlTWeumiR8sZJBO8l5k36jEfgwrtr3Nqcdi+YFuDp2)
2025-06-03 15:08:57.452801 | orchestrator |
2025-06-03 15:08:57.453506 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-03 15:08:57.454146 | orchestrator | Tuesday 03 June 2025 15:08:57 +0000 (0:00:01.021) 0:00:12.650 **********
2025-06-03 15:08:58.519986 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfnFrN7tt/dYi9qmgtb3zNKG88YMQMtd2X0iQ+IGk/TJvkHg3wOaXZkUs8b5pWuQH7pPTRlxUd/oispCfHFtfiDsCINubCtNSywtLC4HU+xYNfsEnW6GrL3xCYmd7klzalsOtHC7a7Ktb46tuRYBj4pYkWuRoF3Ch+YLbY1hJ/R6GpZopdiH3ewVAe64ai5d0O3ppUqaMHbOOFA37EDOQmLxy/BNptSsX9XM21NxuFXxuUQunVhH3L/41n7f9h8wXtSfdX2ZEU/MmAFYJYhc6b06cxc2ZsuYVQNQG+nyLf+Ei73+tNaGo0HvsPPxZSJdejX5kyetAhOKyZawYJnTlnk/HcuMtDimiSeKr3MrgKkyN1p1ujMHc/QsiFbnOv2yzlgGQ5H9QBLJUstR138IB3HGy2YWafldycZwjoXkUQunjbFqn3axx9m7Ddh/QPfwVuaXv0dfE99t5lt8Y4dHMKUuKydEjvMn6ixTyN/236+j5kIhWwyf92TqFz1egmL0U=)
2025-06-03 15:08:58.520109 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJRAm5Fyptlnz5lPi+birLA0ZJX/KMldW8DNbH2SzVf25KoBsGOHdcNCu70G4vJ0jHvsQfVYlWFb2HsDR6f8AMc=)
2025-06-03 15:08:58.520871 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFvFpv+WglLUvszw03woFMliT2aMSSlL2b2OfeL5D9BP)
2025-06-03 15:08:58.521194 | orchestrator |
2025-06-03 15:08:58.521839 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-06-03 15:08:58.522487 | orchestrator | Tuesday 03 June 2025 15:08:58 +0000 (0:00:01.065) 0:00:13.716 **********
2025-06-03 15:09:03.789824 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-06-03 15:09:03.790627 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-06-03 15:09:03.791120 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-06-03 15:09:03.792281 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-06-03 15:09:03.792802 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-06-03 15:09:03.794693 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-06-03 15:09:03.795093 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-06-03 15:09:03.795620 | orchestrator |
2025-06-03 15:09:03.796160 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-06-03 15:09:03.796595 | orchestrator | Tuesday 03 June 2025 15:09:03 +0000 (0:00:05.272) 0:00:18.989 **********
2025-06-03 15:09:03.950302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-06-03 15:09:03.951195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-06-03 15:09:03.952857 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-06-03 15:09:03.953626 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-06-03 15:09:03.954461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-06-03 15:09:03.955407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-06-03 15:09:03.955953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-06-03 15:09:03.956926 | orchestrator |
2025-06-03 15:09:03.957137 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-03 15:09:03.957614 | orchestrator | Tuesday 03 June 2025 15:09:03 +0000 (0:00:00.160) 0:00:19.149 **********
2025-06-03 15:09:04.988941 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKcXh0jwTbjKKksEWv3888V2edom36GFJg5l1qlYULNC)
2025-06-03 15:09:04.989836 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMIrx6pBxLcAz0IvLV3J9fXLtAuxY/jlzXKVRfjm+F1+0BQlyBRCpayIaD91A/w/DZ12dNeYJpIdDEzBkomMQ/rqFgoCrgA13dZRbIQZQooR/BU0pB/W5YvTNWKomfCT1/VgBG2tnk4+fUxWSF+SiWo4d5hYYX2ZVdwKtxkUdelxAuXL69LdlnuHVBpJbuSUFM3z3XYUeuVw176x6CfZTesTS+5Ut80BJcO+5retmBYdq7znCMAdA5xI2gwGUToge0oI1Uu+h3i+qI0iYWyOtHFi8C2c63c6wf8AdjPJ/VJ8ckL782i6lcnYk18zt1vV1VSRXze5a9sSi2czSX4bligW1sXtNEBIJG4dfX7S8fCDznvuncO+EGen+mfCX2VblI2XR81fztGUWsjTvixrhTdu3i6KFAUtG6W3Oen9t/BeGJY6kS8JpXkSpbJd+dahMx6X3hfFVixlLbW62wx0J1tMLp+q4mGlwLmm/T387JCcrd3kFJFBE67xrwR9rFZ+U=)
2025-06-03 15:09:04.990418 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCdFXTPNbqpv4FWbdXqIF8C9bgxcABypizk+q3O3QQjGsNyMbCr76QT29kXPk9A8RcM2usb1Ns4Lr+cVEyW8J5I=)
2025-06-03 15:09:04.991030 | orchestrator |
2025-06-03 15:09:04.991908 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-03 15:09:04.992458 | orchestrator | Tuesday 03 June 2025 15:09:04 +0000 (0:00:01.037) 0:00:20.186 **********
2025-06-03 15:09:06.008937 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfuJTvZ2zGiE9/3I9KcneBGo9LtzxXWiuxdXGI8KtRH2glwXfLnlqXnHe1gGX/UeWwqRtnt0jEPgmyIvcVxnTMFCOAD7X68ix02qrphxh6fu5VFAf5JSqh+01M2BxA3prPg8NY2YBeovoQEZWjpUoVSx37SjCQ3Z5lhzLDUo/x6hO6zmnGsl8dWNtbE2qzv6DpBm3iSoWmnJYfGM7aPzMA+otGzuMU/AhkZuDOvNyIC1Is1+XAnXMjlZyAwtagUyGLyfTOopPN22kDsI9HP24MQoWxtGncrxrSYQVSwFsocqD+OfZL0GZ88clVcZOuq9Qlg9/gNNxWvpvU58cMnVGifOZ/IPpltmPQ3cGyzelzpNkNP1a36xera1oPffU7hL06d4k1mlM4zNC8IC8Fsn0KWZEL6kzOqdVMPcyEGIq07NanXkxbxvGVKvcAVHmHM47OKy384MOvtZbw+yvlwy1iSxtQRxvg99osKhKQ5HHQjuegYoYT8DodpP6rQgh3p4s=)
2025-06-03 15:09:06.009044 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK5rhZ1h2xNPE4zqWAvD9rf0j9GTlmFvjeaykBhSKEKzmIzlP9AFbuo0s4P+fJG3WNP5T3Wz+k1PATI7eUjDHtg=)
2025-06-03 15:09:06.009723 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFgPZWsz2KBkr9fCcDFnNpIbC07pSGPa3fLfC1jpemyd)
2025-06-03 15:09:06.011125 | orchestrator |
2025-06-03 15:09:06.012144 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-03 15:09:06.013259 | orchestrator | Tuesday 03 June 2025 15:09:06 +0000 (0:00:01.020) 0:00:21.207 **********
2025-06-03 15:09:07.052490 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDpLoAljl6rQLZ1xs60AKn7W1EcPrPgKlLpzn3O/o+jtmZplE3d2SEAGf9teqKXNv1uGALsmLABm0xo2WFqXjiU4Aik/Jy2AwPYFnfO51z7u2G50dufKbHGCxtYmndoTpXVFayCGxxi0G92VnD+OhBmV/9ZF1ZmyGUtSSV3FD0yMymPXA0dzsWV84+VxmNTl8svxhqaQASwhsDd06cPx9Yg7j2/Ks6O/h31195yM5sMdpXGD1ZVqfQIyrIo0TWzJlMFrJkaMrtG3x78OV5/KixeN6gVP47e3jlbuStJfQFPWTj968G2DRfJKRhm6EyGsIudHRWKp3GQGvaMnNw1UoIp9vfULWWrm4M+MTBmNzxiGJTbb2J3S5ZNtsW/lPBFvC1qh/3rlDLIryfLHpIJcnxQIxlUb0hbUF/4zZJxiKz581tSvaBPE6KOZ/PKgRDyCT/HwsG42pAPJ3ztDaIJIbQq6EgTECg4dsFCQ1460rsWE9XS6505N5FElnwnYKXyT/c=)
2025-06-03 15:09:07.052723 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLHQkgGsyxSV4e+o/9ZANdXVD0NwCDdBV0Lm5T9GBJ6nZpSu8M0+Oa2mpCjTAvW81zNXvm5BN7SH0UQH+uZ9OKs=)
2025-06-03 15:09:07.052751 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAp6Jdgoji8v6QBcfyhexhU10STLohyNGKoCkCoD3B5E)
2025-06-03 15:09:07.053680 | orchestrator |
2025-06-03 15:09:07.053846 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-03 15:09:07.054666 | orchestrator | Tuesday 03 June 2025 15:09:07 +0000 (0:00:01.042) 0:00:22.249 **********
2025-06-03 15:09:08.131251 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDt2g7R5U0vSqN8sQ2y4eDxEy2gkt5KvQ99filZq15zwNrg1/FaXz1cBhcqw/lTbBi4jk1jnnxEz/ZeXLzJvpTw8Krwx7RaqMqkiexS0f0/4ZoXHQHhlg56k6lvDPNS87ScWNA6BGMSB6/tuM2h+fHfa50oXTqFT8rC0CtcZA6Cn2Mmkx/JG8YGsvynYszDNoRC2Ffzv7ohMAx2mAzaIt9bzl3cJ6NegIXfwSbzuZyXD7ndQG1EFv1RRpNxzNk/lwGgxUJOiqfSNE33Nf07Fa4gG6Id1OJ5t2wudM9aE9SpgUkJSf2wSblL9Y+trNtrxVVTG30IlhSnElxHUoqkbPOMaOXtEsxIJGOpm/vdGQUT9uBWXDPIZEIGkSiLKI0jO6bH16mEDU9Gb0QdJk3Gp4S+9ALOfBGE/bfUs6KATsIPEyCVH2OWxiGC7UQIr2X8ACaKoVwNQqy9C/fTE6ZBgbJWLg8P/BSoD3V0LdoCiv0P5vafoQ18oMV9iaNLFxp1mU=)
2025-06-03 15:09:08.131825 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEJ5Yx3FwLzltWkf1nDE664ZKzM4Zk1sZilQSazhJWIL+iEsI3fTDI1P1CnExueXRQKEZYzKEUW9N6JjrcXo7MI=)
2025-06-03 15:09:08.132042 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEq4sv1jNlaaW9KHOjcLFlVQ9pHHhSqsV/VDZMXerHVp)
2025-06-03 15:09:08.132940 | orchestrator |
2025-06-03 15:09:08.133506 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-03 15:09:08.134127 | orchestrator | Tuesday 03 June 2025 15:09:08 +0000 (0:00:01.079) 0:00:23.329 **********
2025-06-03 15:09:09.222288 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPg5UUrFuF5c2lIsExhhY5yUoMSSXiJabSBQaL8zWqjj)
2025-06-03 15:09:09.222782 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDrEwy8p5D6GqskiprnYKfF6cGDqdeRexdkKQZ+m7gooLK0SRYEe8tn6loIZh6wzdgQP3ilXg1byEZOENriG0thSkpNPnLTKafL0EIEXiHWlHjYSuac8uIF/4C5OLqhxLsCQrBHiBO7+g39RnQTuu/QqVni4v2ZuF6qm6telP/hrrmctnc3l/ZHd0AUeysKsT9iRAPihYY5giEie3XMf0sMPzSqgcdqCXGaWwoI4xFql/x9TGYQld9QclO+yzuXLvaDP7Q2OrY/+HyoyKFWmWP4K51iZHN4vxk25sfExYem1B+LKAFFBIDmXiSfYMYLRPZOqiWlkgujPiZH09Me2fvzaMpTrgS4qt2fWDf1k0aiPhbhd05XdctPK/AbrHl9oAxdN0pQg+6olNZE1YGlcGNJYd8HgCa8NR7h6WkthW0qbu0pUR452HWieAPlBRYflYLGwfpdHWOzBM5sOeK7pUDt9oLNf6vb266VCF1x/9RwdKZUh15UfS0iKYUbNSAlztk=)
2025-06-03 15:09:09.222821 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLKSlr7Xl1jnuEWYZC7hSlFiVs68E47wNoFCraap5XFj0yX6G2blCRwwV1Up6VZwhBZIrPemYojoKdLm8H4+/0o=)
2025-06-03 15:09:09.224558 | orchestrator |
2025-06-03 15:09:09.225156 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-03 15:09:09.225945 | orchestrator | Tuesday 03 June 2025 15:09:09 +0000 (0:00:01.090) 0:00:24.420
**********
2025-06-03 15:09:10.320862 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGYlTWeumiR8sZJBO8l5k36jEfgwrtr3Nqcdi+YFuDp2)
2025-06-03 15:09:10.323172 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKgF5wM26Rw7kXYGWwEzcRmgMCC9G/X8SIoYIyJwstpqC3VuUCqh1DXfzBy8SMUHSbRZkN0mRBiUejG7lkut5wQNWPn62oXLLgUH8zsC89e9uEE3SnkjC6YDXK8JgNM6z+JbarN/gAsowCTE9ivvga+H6TISQRyFkwNVvDC/dIhhFgwuh13vefeeLVAqhswniKzTenugX3Ilr6cizWuurljb22HqwmVEWwooJPq8wcKxCNlx8GX0ddphS0ieK09wXQ7Z93zm5kzk0ds14YqBaZAU6ybKQctCEWdhMxKxcKJ8sNuqrGDWjA+rLTh87LLtIsSUEN4CW4ouUmBVBWuErOikQKhKKKFkOJTOHeZGRx7MN6A95B8NVFoju4sO2hyRZSGzW41ftLXRqAj0hIqWgD4PIaF17DDPyYQkcJ9YBI7o2OjMVeO64VRFhSRAtUcdkRjIwY8Gz2ZLPt4pMx65g+CjOsEb3mpOvT6zB+04ummYIBWD3dBL/ptWkYEo2CGt0=)
2025-06-03 15:09:10.323227 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH5br0BzkZmgIAIHFoE5FBuslCoEy9w+zVxNIIAE1mKKTvX8IwmOX824qmjipQMqiscJNL/iqoEohgT8ruMNjB8=)
2025-06-03 15:09:10.323532 | orchestrator |
2025-06-03 15:09:10.323974 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-03 15:09:10.324624 | orchestrator | Tuesday 03 June 2025 15:09:10 +0000 (0:00:01.099) 0:00:25.519 **********
2025-06-03 15:09:11.405324 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFvFpv+WglLUvszw03woFMliT2aMSSlL2b2OfeL5D9BP)
2025-06-03 15:09:11.405547 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfnFrN7tt/dYi9qmgtb3zNKG88YMQMtd2X0iQ+IGk/TJvkHg3wOaXZkUs8b5pWuQH7pPTRlxUd/oispCfHFtfiDsCINubCtNSywtLC4HU+xYNfsEnW6GrL3xCYmd7klzalsOtHC7a7Ktb46tuRYBj4pYkWuRoF3Ch+YLbY1hJ/R6GpZopdiH3ewVAe64ai5d0O3ppUqaMHbOOFA37EDOQmLxy/BNptSsX9XM21NxuFXxuUQunVhH3L/41n7f9h8wXtSfdX2ZEU/MmAFYJYhc6b06cxc2ZsuYVQNQG+nyLf+Ei73+tNaGo0HvsPPxZSJdejX5kyetAhOKyZawYJnTlnk/HcuMtDimiSeKr3MrgKkyN1p1ujMHc/QsiFbnOv2yzlgGQ5H9QBLJUstR138IB3HGy2YWafldycZwjoXkUQunjbFqn3axx9m7Ddh/QPfwVuaXv0dfE99t5lt8Y4dHMKUuKydEjvMn6ixTyN/236+j5kIhWwyf92TqFz1egmL0U=)
2025-06-03 15:09:11.405910 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJRAm5Fyptlnz5lPi+birLA0ZJX/KMldW8DNbH2SzVf25KoBsGOHdcNCu70G4vJ0jHvsQfVYlWFb2HsDR6f8AMc=)
2025-06-03 15:09:11.406502 | orchestrator |
2025-06-03 15:09:11.406853 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-06-03 15:09:11.407292 | orchestrator | Tuesday 03 June 2025 15:09:11 +0000 (0:00:01.085) 0:00:26.604 **********
2025-06-03 15:09:11.583056 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-06-03 15:09:11.583502 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-06-03 15:09:11.583961 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-06-03 15:09:11.584450 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-06-03 15:09:11.585123 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-06-03 15:09:11.585564 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-06-03 15:09:11.586169 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-06-03 15:09:11.586779 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:09:11.588444 | orchestrator |
2025-06-03 15:09:11.588613 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-06-03 15:09:11.589583 | orchestrator | Tuesday 03 June 2025 15:09:11 +0000 (0:00:00.178) 0:00:26.783 **********
2025-06-03 15:09:11.659779 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:09:11.660518 | orchestrator |
2025-06-03 15:09:11.661319 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-06-03 15:09:11.661622 | orchestrator | Tuesday 03 June 2025 15:09:11 +0000 (0:00:00.076) 0:00:26.859 **********
2025-06-03 15:09:11.716500 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:09:11.716668 | orchestrator |
2025-06-03 15:09:11.717779 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-06-03 15:09:11.717889 | orchestrator | Tuesday 03 June 2025 15:09:11 +0000 (0:00:00.056) 0:00:26.916 **********
2025-06-03 15:09:12.209061 | orchestrator | changed: [testbed-manager]
2025-06-03 15:09:12.213574 | orchestrator |
2025-06-03 15:09:12.215032 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:09:12.215556 | orchestrator | 2025-06-03 15:09:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-03 15:09:12.216367 | orchestrator | 2025-06-03 15:09:12 | INFO  | Please wait and do not abort execution.
2025-06-03 15:09:12.217516 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-03 15:09:12.219140 | orchestrator |
2025-06-03 15:09:12.221986 | orchestrator |
2025-06-03 15:09:12.222010 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:09:12.222512 | orchestrator | Tuesday 03 June 2025 15:09:12 +0000 (0:00:00.490) 0:00:27.406 **********
2025-06-03 15:09:12.223487 | orchestrator | ===============================================================================
2025-06-03 15:09:12.223822 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.96s
2025-06-03 15:09:12.224618 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.27s
2025-06-03 15:09:12.225259 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s
2025-06-03 15:09:12.225693 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2025-06-03 15:09:12.226159 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-06-03 15:09:12.226578 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-06-03 15:09:12.227024 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2025-06-03 15:09:12.227620 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-06-03 15:09:12.228003 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2025-06-03 15:09:12.228476 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-06-03 15:09:12.228915 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-06-03 15:09:12.229359 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-06-03 15:09:12.229684 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-06-03 15:09:12.230087 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-06-03 15:09:12.230487 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-06-03 15:09:12.230824 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2025-06-03 15:09:12.231174 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.49s
2025-06-03 15:09:12.231570 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s
2025-06-03 15:09:12.231868 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s
2025-06-03 15:09:12.232229 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s
2025-06-03 15:09:12.726345 | orchestrator | + osism apply squid
2025-06-03 15:09:14.355177 | orchestrator | Registering Redlock._acquired_script
2025-06-03 15:09:14.355276 | orchestrator | Registering Redlock._extend_script
2025-06-03 15:09:14.355292 | orchestrator | Registering Redlock._release_script
2025-06-03 15:09:14.417826 | orchestrator | 2025-06-03 15:09:14 | INFO  | Task 4bf636eb-3e28-4553-9b2b-a182d6b77b15 (squid) was prepared for execution.
2025-06-03 15:09:14.417953 | orchestrator | 2025-06-03 15:09:14 | INFO  | It takes a moment until task 4bf636eb-3e28-4553-9b2b-a182d6b77b15 (squid) has been started and output is visible here.
2025-06-03 15:09:18.372176 | orchestrator |
2025-06-03 15:09:18.374993 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-06-03 15:09:18.376544 | orchestrator |
2025-06-03 15:09:18.376781 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-06-03 15:09:18.377263 | orchestrator | Tuesday 03 June 2025 15:09:18 +0000 (0:00:00.168) 0:00:00.168 **********
2025-06-03 15:09:18.450200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-06-03 15:09:18.450293 | orchestrator |
2025-06-03 15:09:18.450307 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-06-03 15:09:18.450854 | orchestrator | Tuesday 03 June 2025 15:09:18 +0000 (0:00:00.079) 0:00:00.248 **********
2025-06-03 15:09:19.926849 | orchestrator | ok: [testbed-manager]
2025-06-03 15:09:19.926958 | orchestrator |
2025-06-03 15:09:19.927624 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-06-03 15:09:19.928237 | orchestrator | Tuesday 03 June 2025 15:09:19 +0000 (0:00:01.475) 0:00:01.724 **********
2025-06-03 15:09:21.058821 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-06-03 15:09:21.058929 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-06-03 15:09:21.058945 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-06-03 15:09:21.059678 | orchestrator |
2025-06-03 15:09:21.060252 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-06-03 15:09:21.060610 | orchestrator | Tuesday 03 June 2025 15:09:21 +0000 (0:00:01.129) 0:00:02.854 **********
2025-06-03 15:09:22.115731 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-06-03 15:09:22.116968 | orchestrator |
2025-06-03 15:09:22.117408 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-06-03 15:09:22.118148 | orchestrator | Tuesday 03 June 2025 15:09:22 +0000 (0:00:01.059) 0:00:03.913 **********
2025-06-03 15:09:22.475218 | orchestrator | ok: [testbed-manager]
2025-06-03 15:09:22.475511 | orchestrator |
2025-06-03 15:09:22.475867 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-06-03 15:09:22.476202 | orchestrator | Tuesday 03 June 2025 15:09:22 +0000 (0:00:00.361) 0:00:04.275 **********
2025-06-03 15:09:23.393521 | orchestrator | changed: [testbed-manager]
2025-06-03 15:09:23.394067 | orchestrator |
2025-06-03 15:09:23.394603 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-06-03 15:09:23.395326 | orchestrator | Tuesday 03 June 2025 15:09:23 +0000 (0:00:00.916) 0:00:05.191 **********
2025-06-03 15:09:54.322723 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-06-03 15:09:54.322856 | orchestrator | ok: [testbed-manager]
2025-06-03 15:09:54.322885 | orchestrator |
2025-06-03 15:09:54.322905 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-06-03 15:09:54.322926 | orchestrator | Tuesday 03 June 2025 15:09:54 +0000 (0:00:30.924) 0:00:36.116 **********
2025-06-03 15:10:06.799746 | orchestrator | changed: [testbed-manager]
2025-06-03 15:10:06.800289 | orchestrator |
2025-06-03 15:10:06.801264 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-06-03 15:10:06.803384 | orchestrator | Tuesday 03 June 2025 15:10:06 +0000 (0:00:12.479) 0:00:48.596 **********
2025-06-03 15:11:06.883419 | orchestrator | Pausing for 60 seconds
2025-06-03 15:11:06.883527 | orchestrator | changed: [testbed-manager]
2025-06-03 15:11:06.883834 | orchestrator |
2025-06-03 15:11:06.883860 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-06-03 15:11:06.885187 | orchestrator | Tuesday 03 June 2025 15:11:06 +0000 (0:01:00.083) 0:01:48.679 **********
2025-06-03 15:11:06.931466 | orchestrator | ok: [testbed-manager]
2025-06-03 15:11:06.932598 | orchestrator |
2025-06-03 15:11:06.933976 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-06-03 15:11:06.934657 | orchestrator | Tuesday 03 June 2025 15:11:06 +0000 (0:00:00.050) 0:01:48.730 **********
2025-06-03 15:11:07.543639 | orchestrator | changed: [testbed-manager]
2025-06-03 15:11:07.544503 | orchestrator |
2025-06-03 15:11:07.547050 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:11:07.547433 | orchestrator | 2025-06-03 15:11:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-03 15:11:07.547908 | orchestrator | 2025-06-03 15:11:07 | INFO  | Please wait and do not abort execution.
2025-06-03 15:11:07.549248 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:11:07.549975 | orchestrator |
2025-06-03 15:11:07.551010 | orchestrator |
2025-06-03 15:11:07.551952 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:11:07.553188 | orchestrator | Tuesday 03 June 2025 15:11:07 +0000 (0:00:00.610) 0:01:49.341 **********
2025-06-03 15:11:07.554088 | orchestrator | ===============================================================================
2025-06-03 15:11:07.554873 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2025-06-03 15:11:07.555529 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.92s
2025-06-03 15:11:07.556219 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.48s
2025-06-03 15:11:07.556913 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.48s
2025-06-03 15:11:07.557601 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.13s
2025-06-03 15:11:07.558436 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.06s
2025-06-03 15:11:07.558888 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.92s
2025-06-03 15:11:07.559507 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.61s
2025-06-03 15:11:07.560156 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s
2025-06-03 15:11:07.560498 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s
2025-06-03 15:11:07.561019 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.05s
2025-06-03 15:11:08.013805 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-03 15:11:08.014760 | orchestrator | ++ semver latest 9.0.0
2025-06-03 15:11:08.064044 | orchestrator | + [[ -1 -lt 0 ]]
2025-06-03 15:11:08.064136 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-03 15:11:08.064519 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-06-03 15:11:09.699721 | orchestrator | Registering Redlock._acquired_script
2025-06-03 15:11:09.699782 | orchestrator | Registering Redlock._extend_script
2025-06-03 15:11:09.699790 | orchestrator | Registering Redlock._release_script
2025-06-03 15:11:09.758416 | orchestrator | 2025-06-03 15:11:09 | INFO  | Task bda14267-eaa9-4b43-8828-36bf7d6c19b1 (operator) was prepared for execution.
2025-06-03 15:11:09.758489 | orchestrator | 2025-06-03 15:11:09 | INFO  | It takes a moment until task bda14267-eaa9-4b43-8828-36bf7d6c19b1 (operator) has been started and output is visible here.
2025-06-03 15:11:13.604303 | orchestrator |
2025-06-03 15:11:13.605923 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-06-03 15:11:13.606005 | orchestrator |
2025-06-03 15:11:13.607044 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-03 15:11:13.609227 | orchestrator | Tuesday 03 June 2025 15:11:13 +0000 (0:00:00.146) 0:00:00.146 **********
2025-06-03 15:11:16.815070 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:11:16.815469 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:11:16.815796 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:11:16.816572 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:11:16.816840 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:11:16.817437 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:11:16.817849 | orchestrator |
2025-06-03 15:11:16.818406 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-06-03 15:11:16.818787 | orchestrator | Tuesday 03 June 2025 15:11:16 +0000 (0:00:03.214) 0:00:03.361 **********
2025-06-03 15:11:17.585051 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:11:17.585817 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:11:17.587206 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:11:17.587778 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:11:17.588525 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:11:17.589505 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:11:17.590718 | orchestrator |
2025-06-03 15:11:17.590885 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-06-03 15:11:17.591878 | orchestrator |
2025-06-03 15:11:17.592166 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-06-03 15:11:17.592601 | orchestrator | Tuesday 03 June 2025 15:11:17 +0000 (0:00:00.768) 0:00:04.129 **********
2025-06-03 15:11:17.667534 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:11:17.683999 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:11:17.707645 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:11:17.743447 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:11:17.743508 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:11:17.743860 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:11:17.744291 | orchestrator |
2025-06-03 15:11:17.744513 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-06-03 15:11:17.745045 | orchestrator | Tuesday 03 June 2025 15:11:17 +0000 (0:00:00.157) 0:00:04.287 **********
2025-06-03 15:11:17.804085 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:11:17.828024 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:11:17.845155 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:11:17.887399 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:11:17.887992 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:11:17.888151 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:11:17.888836 | orchestrator |
2025-06-03 15:11:17.889563 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-06-03 15:11:17.890361 | orchestrator | Tuesday 03 June 2025 15:11:17 +0000 (0:00:00.145) 0:00:04.433 **********
2025-06-03 15:11:18.526207 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:11:18.528263 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:11:18.529417 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:11:18.530514 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:11:18.531313 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:11:18.532158 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:11:18.532954 | orchestrator |
2025-06-03 15:11:18.533505 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-06-03 15:11:18.534550 | orchestrator | Tuesday 03 June 2025 15:11:18 +0000 (0:00:00.637) 0:00:05.070 **********
2025-06-03 15:11:19.310783 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:11:19.310882 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:11:19.310893 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:11:19.311519 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:11:19.311729 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:11:19.312503 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:11:19.312804 | orchestrator |
2025-06-03 15:11:19.313239 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-06-03 15:11:19.313676 | orchestrator | Tuesday 03 June 2025 15:11:19 +0000 (0:00:00.780) 0:00:05.851 **********
2025-06-03 15:11:20.523976 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-06-03 15:11:20.524192 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-06-03 15:11:20.524788 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-06-03 15:11:20.524884 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-06-03 15:11:20.527557 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-06-03 15:11:20.528168 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-06-03 15:11:20.530110 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-06-03 15:11:20.530136 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-06-03 15:11:20.530554 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-06-03 15:11:20.531948 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-06-03 15:11:20.532661 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-06-03 15:11:20.533771 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-06-03 15:11:20.534570 | orchestrator |
2025-06-03 15:11:20.535161 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-06-03 15:11:20.536220 | orchestrator | Tuesday 03 June 2025 15:11:20 +0000 (0:00:01.216) 0:00:07.067 **********
2025-06-03 15:11:22.755323 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:11:22.755711 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:11:22.756819 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:11:22.758189 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:11:22.759647 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:11:22.760410 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:11:22.760729 | orchestrator |
2025-06-03 15:11:22.761509 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-06-03 15:11:22.762231 | orchestrator | Tuesday 03 June 2025 15:11:22 +0000 (0:00:02.231) 0:00:09.299 **********
2025-06-03 15:11:23.799795 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-06-03 15:11:23.799913 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-06-03 15:11:23.799929 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-06-03 15:11:23.852995 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-06-03 15:11:23.856070 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-06-03 15:11:23.856158 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-06-03 15:11:23.856173 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-06-03 15:11:23.856552 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-06-03 15:11:23.857273 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-06-03 15:11:23.858103 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-06-03 15:11:23.858857 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-06-03 15:11:23.859983 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-06-03 15:11:23.860438 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-06-03 15:11:23.861210 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-06-03 15:11:23.861869 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-06-03 15:11:23.862233 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-06-03 15:11:23.862905 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-06-03 15:11:23.863299 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-06-03 15:11:23.863696 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-06-03 15:11:23.864078 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-06-03 15:11:23.864544 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-06-03 15:11:23.864931 |
orchestrator | 2025-06-03 15:11:23.865433 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-03 15:11:23.865799 | orchestrator | Tuesday 03 June 2025 15:11:23 +0000 (0:00:01.099) 0:00:10.398 ********** 2025-06-03 15:11:24.359065 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:11:24.359543 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:11:24.360544 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:11:24.362211 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:11:24.362499 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:11:24.362527 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:11:24.363185 | orchestrator | 2025-06-03 15:11:24.363560 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-03 15:11:24.364469 | orchestrator | Tuesday 03 June 2025 15:11:24 +0000 (0:00:00.505) 0:00:10.903 ********** 2025-06-03 15:11:24.433903 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:11:24.454581 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:11:24.489095 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:11:24.489468 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:11:24.489778 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:11:24.490627 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:11:24.490652 | orchestrator | 2025-06-03 15:11:24.490839 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-03 15:11:24.491047 | orchestrator | Tuesday 03 June 2025 15:11:24 +0000 (0:00:00.131) 0:00:11.035 ********** 2025-06-03 15:11:25.126932 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-03 15:11:25.128709 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-03 15:11:25.128741 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:11:25.129702 | orchestrator | changed: [testbed-node-1] 2025-06-03 
15:11:25.130584 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-03 15:11:25.131116 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:11:25.131526 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-03 15:11:25.132276 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:11:25.132767 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-03 15:11:25.133697 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-03 15:11:25.134091 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:11:25.134411 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:11:25.134935 | orchestrator | 2025-06-03 15:11:25.135289 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-03 15:11:25.135770 | orchestrator | Tuesday 03 June 2025 15:11:25 +0000 (0:00:00.635) 0:00:11.671 ********** 2025-06-03 15:11:25.181317 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:11:25.200380 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:11:25.217179 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:11:25.241406 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:11:25.242725 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:11:25.244114 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:11:25.245285 | orchestrator | 2025-06-03 15:11:25.246126 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-03 15:11:25.246914 | orchestrator | Tuesday 03 June 2025 15:11:25 +0000 (0:00:00.115) 0:00:11.787 ********** 2025-06-03 15:11:25.277568 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:11:25.294189 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:11:25.331551 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:11:25.354520 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:11:25.355424 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:11:25.356065 | 
orchestrator | skipping: [testbed-node-5] 2025-06-03 15:11:25.357016 | orchestrator | 2025-06-03 15:11:25.357556 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-03 15:11:25.358510 | orchestrator | Tuesday 03 June 2025 15:11:25 +0000 (0:00:00.113) 0:00:11.900 ********** 2025-06-03 15:11:25.403169 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:11:25.419855 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:11:25.436394 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:11:25.452985 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:11:25.476370 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:11:25.477264 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:11:25.479004 | orchestrator | 2025-06-03 15:11:25.480354 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-03 15:11:25.481403 | orchestrator | Tuesday 03 June 2025 15:11:25 +0000 (0:00:00.121) 0:00:12.022 ********** 2025-06-03 15:11:26.047090 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:11:26.047271 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:11:26.047822 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:11:26.048898 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:11:26.049603 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:11:26.050528 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:11:26.051128 | orchestrator | 2025-06-03 15:11:26.051949 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-03 15:11:26.052630 | orchestrator | Tuesday 03 June 2025 15:11:26 +0000 (0:00:00.568) 0:00:12.590 ********** 2025-06-03 15:11:26.136615 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:11:26.156645 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:11:26.265986 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:11:26.267225 | 
orchestrator | skipping: [testbed-node-3] 2025-06-03 15:11:26.267922 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:11:26.268261 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:11:26.269488 | orchestrator | 2025-06-03 15:11:26.270532 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:11:26.270955 | orchestrator | 2025-06-03 15:11:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:11:26.271429 | orchestrator | 2025-06-03 15:11:26 | INFO  | Please wait and do not abort execution. 2025-06-03 15:11:26.272699 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:11:26.273479 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:11:26.274269 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:11:26.274924 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:11:26.275654 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:11:26.276872 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:11:26.277567 | orchestrator | 2025-06-03 15:11:26.278311 | orchestrator | 2025-06-03 15:11:26.279075 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:11:26.279742 | orchestrator | Tuesday 03 June 2025 15:11:26 +0000 (0:00:00.220) 0:00:12.811 ********** 2025-06-03 15:11:26.280417 | orchestrator | =============================================================================== 2025-06-03 15:11:26.280871 | orchestrator | Gathering Facts --------------------------------------------------------- 
3.21s 2025-06-03 15:11:26.281956 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 2.23s 2025-06-03 15:11:26.282778 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.22s 2025-06-03 15:11:26.283516 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.10s 2025-06-03 15:11:26.284404 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.78s 2025-06-03 15:11:26.285486 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s 2025-06-03 15:11:26.286268 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.64s 2025-06-03 15:11:26.286793 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.64s 2025-06-03 15:11:26.287743 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.57s 2025-06-03 15:11:26.288533 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.51s 2025-06-03 15:11:26.289170 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s 2025-06-03 15:11:26.289947 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s 2025-06-03 15:11:26.290626 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s 2025-06-03 15:11:26.291722 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.13s 2025-06-03 15:11:26.293230 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.12s 2025-06-03 15:11:26.294751 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.12s 2025-06-03 15:11:26.295366 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.11s 
2025-06-03 15:11:26.704564 | orchestrator | + osism apply --environment custom facts
2025-06-03 15:11:28.316613 | orchestrator | 2025-06-03 15:11:28 | INFO  | Trying to run play facts in environment custom
2025-06-03 15:11:28.321149 | orchestrator | Registering Redlock._acquired_script
2025-06-03 15:11:28.321230 | orchestrator | Registering Redlock._extend_script
2025-06-03 15:11:28.321246 | orchestrator | Registering Redlock._release_script
2025-06-03 15:11:28.378685 | orchestrator | 2025-06-03 15:11:28 | INFO  | Task 8ec1e6ea-15df-4cc3-8f49-36905169227b (facts) was prepared for execution.
2025-06-03 15:11:28.378748 | orchestrator | 2025-06-03 15:11:28 | INFO  | It takes a moment until task 8ec1e6ea-15df-4cc3-8f49-36905169227b (facts) has been started and output is visible here.
2025-06-03 15:11:32.109682 | orchestrator |
2025-06-03 15:11:32.109890 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-06-03 15:11:32.109913 | orchestrator |
2025-06-03 15:11:32.109925 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-03 15:11:32.109938 | orchestrator | Tuesday 03 June 2025 15:11:32 +0000 (0:00:00.065) 0:00:00.065 **********
2025-06-03 15:11:33.415124 | orchestrator | ok: [testbed-manager]
2025-06-03 15:11:33.415292 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:11:33.415446 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:11:33.416051 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:11:33.417724 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:11:33.417750 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:11:33.418465 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:11:33.419228 | orchestrator |
2025-06-03 15:11:33.420143 | orchestrator | TASK [Copy fact file] **********************************************************
2025-06-03 15:11:33.421091 | orchestrator | Tuesday 03 June 2025 15:11:33 +0000 (0:00:01.308) 0:00:01.374 **********
2025-06-03 15:11:34.504778 | orchestrator | ok: [testbed-manager]
2025-06-03 15:11:34.505220 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:11:34.506606 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:11:34.507837 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:11:34.508177 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:11:34.509402 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:11:34.510117 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:11:34.510954 | orchestrator |
2025-06-03 15:11:34.511961 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-06-03 15:11:34.512544 | orchestrator |
2025-06-03 15:11:34.513314 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-06-03 15:11:34.514080 | orchestrator | Tuesday 03 June 2025 15:11:34 +0000 (0:00:01.089) 0:00:02.463 **********
2025-06-03 15:11:34.597735 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:11:34.598285 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:11:34.598999 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:11:34.599746 | orchestrator |
2025-06-03 15:11:34.600485 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-06-03 15:11:34.600938 | orchestrator | Tuesday 03 June 2025 15:11:34 +0000 (0:00:00.093) 0:00:02.557 **********
2025-06-03 15:11:34.776577 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:11:34.776837 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:11:34.777060 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:11:34.777499 | orchestrator |
2025-06-03 15:11:34.778191 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-06-03 15:11:34.778218 | orchestrator | Tuesday 03 June 2025 15:11:34 +0000 (0:00:00.178) 0:00:02.735 **********
2025-06-03 15:11:34.951592 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:11:34.951714 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:11:34.951730 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:11:34.951742 | orchestrator |
2025-06-03 15:11:34.951837 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-06-03 15:11:34.952465 | orchestrator | Tuesday 03 June 2025 15:11:34 +0000 (0:00:00.173) 0:00:02.909 **********
2025-06-03 15:11:35.065661 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:11:35.066187 | orchestrator |
2025-06-03 15:11:35.067734 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-06-03 15:11:35.068959 | orchestrator | Tuesday 03 June 2025 15:11:35 +0000 (0:00:00.115) 0:00:03.025 **********
2025-06-03 15:11:35.481760 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:11:35.482269 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:11:35.482950 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:11:35.484176 | orchestrator |
2025-06-03 15:11:35.484824 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-06-03 15:11:35.485915 | orchestrator | Tuesday 03 June 2025 15:11:35 +0000 (0:00:00.414) 0:00:03.440 **********
2025-06-03 15:11:35.558797 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:11:35.558956 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:11:35.559396 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:11:35.559934 | orchestrator |
2025-06-03 15:11:35.560653 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-06-03 15:11:35.560936 | orchestrator | Tuesday 03 June 2025 15:11:35 +0000 (0:00:00.079) 0:00:03.519 **********
2025-06-03 15:11:36.584148 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:11:36.584627 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:11:36.585213 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:11:36.585787 | orchestrator |
2025-06-03 15:11:36.586799 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-06-03 15:11:36.587502 | orchestrator | Tuesday 03 June 2025 15:11:36 +0000 (0:00:01.022) 0:00:04.542 **********
2025-06-03 15:11:37.057792 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:11:37.058388 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:11:37.058747 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:11:37.059508 | orchestrator |
2025-06-03 15:11:37.060011 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-06-03 15:11:37.060682 | orchestrator | Tuesday 03 June 2025 15:11:37 +0000 (0:00:00.474) 0:00:05.016 **********
2025-06-03 15:11:38.174972 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:11:38.176635 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:11:38.177461 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:11:38.178062 | orchestrator |
2025-06-03 15:11:38.178614 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-06-03 15:11:38.179501 | orchestrator | Tuesday 03 June 2025 15:11:38 +0000 (0:00:01.115) 0:00:06.132 **********
2025-06-03 15:11:50.755035 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:11:50.755147 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:11:50.755161 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:11:50.755173 | orchestrator |
2025-06-03 15:11:50.755185 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-06-03 15:11:50.755198 | orchestrator | Tuesday 03 June 2025 15:11:50 +0000 (0:00:12.576) 0:00:18.708 **********
2025-06-03 15:11:50.859629 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:11:50.859791 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:11:50.861010 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:11:50.861069 | orchestrator |
2025-06-03 15:11:50.861223 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-06-03 15:11:50.861830 | orchestrator | Tuesday 03 June 2025 15:11:50 +0000 (0:00:00.108) 0:00:18.816 **********
2025-06-03 15:11:58.329071 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:11:58.329167 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:11:58.329183 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:11:58.329982 | orchestrator |
2025-06-03 15:11:58.330080 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-03 15:11:58.330796 | orchestrator | Tuesday 03 June 2025 15:11:58 +0000 (0:00:07.470) 0:00:26.287 **********
2025-06-03 15:11:58.779987 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:11:58.780894 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:11:58.781576 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:11:58.782082 | orchestrator |
2025-06-03 15:11:58.782582 | orchestrator | TASK [Copy fact files] *********************************************************
2025-06-03 15:11:58.783139 | orchestrator | Tuesday 03 June 2025 15:11:58 +0000 (0:00:00.451) 0:00:26.738 **********
2025-06-03 15:12:02.306470 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-06-03 15:12:02.308469 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-06-03 15:12:02.309375 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-06-03 15:12:02.310497 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-06-03 15:12:02.311956 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-06-03 15:12:02.312604 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-06-03 15:12:02.313526 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-06-03 15:12:02.314216 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-06-03 15:12:02.314881 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-06-03 15:12:02.315151 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-06-03 15:12:02.315935 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-06-03 15:12:02.316541 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-06-03 15:12:02.316779 | orchestrator |
2025-06-03 15:12:02.317122 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-03 15:12:02.317691 | orchestrator | Tuesday 03 June 2025 15:12:02 +0000 (0:00:03.524) 0:00:30.263 **********
2025-06-03 15:12:03.573511 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:12:03.576584 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:12:03.576621 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:12:03.576633 | orchestrator |
2025-06-03 15:12:03.576647 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-03 15:12:03.576975 | orchestrator |
2025-06-03 15:12:03.577630 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-03 15:12:03.578078 | orchestrator | Tuesday 03 June 2025 15:12:03 +0000 (0:00:01.268) 0:00:31.531 **********
2025-06-03 15:12:07.407044 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:12:07.407136 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:12:07.407176 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:12:07.408357 | orchestrator | ok: [testbed-manager]
2025-06-03 15:12:07.408911 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:12:07.410233 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:12:07.410443 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:12:07.411509 | orchestrator |
2025-06-03 15:12:07.412355 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:12:07.412811 | orchestrator | 2025-06-03 15:12:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-03 15:12:07.412834 | orchestrator | 2025-06-03 15:12:07 | INFO  | Please wait and do not abort execution.
2025-06-03 15:12:07.413233 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:12:07.413981 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:12:07.414591 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:12:07.414960 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:12:07.415661 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-03 15:12:07.416565 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-03 15:12:07.417034 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-03 15:12:07.417642 | orchestrator |
2025-06-03 15:12:07.418193 | orchestrator |
2025-06-03 15:12:07.418746 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:12:07.419501 | orchestrator | Tuesday 03 June 2025 15:12:07 +0000 (0:00:03.834) 0:00:35.365 **********
2025-06-03 15:12:07.419525 | orchestrator | ===============================================================================
2025-06-03 15:12:07.420294 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.58s
2025-06-03 15:12:07.421696 | orchestrator | Install required packages (Debian) -------------------------------------- 7.47s
2025-06-03 15:12:07.421958 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.83s
2025-06-03 15:12:07.422429 | orchestrator | Copy fact files --------------------------------------------------------- 3.52s
2025-06-03 15:12:07.422827 | orchestrator | Create custom facts directory ------------------------------------------- 1.31s
2025-06-03 15:12:07.423196 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.27s
2025-06-03 15:12:07.423837 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.12s
2025-06-03 15:12:07.424038 | orchestrator | Copy fact file ---------------------------------------------------------- 1.09s
2025-06-03 15:12:07.424908 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.02s
2025-06-03 15:12:07.425445 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2025-06-03 15:12:07.425584 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s
2025-06-03 15:12:07.426103 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.41s
2025-06-03 15:12:07.426500 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s
2025-06-03 15:12:07.426893 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.17s
2025-06-03 15:12:07.427378 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s
2025-06-03 15:12:07.427794 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-06-03 15:12:07.428540 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s
2025-06-03 15:12:07.428781 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.08s
2025-06-03 15:12:07.843911 | orchestrator | + osism apply bootstrap
2025-06-03 15:12:09.484707 | orchestrator | Registering Redlock._acquired_script
2025-06-03 15:12:09.484814 | orchestrator | Registering Redlock._extend_script
2025-06-03 15:12:09.484828 | orchestrator | Registering Redlock._release_script
2025-06-03 15:12:09.543196 | orchestrator | 2025-06-03 15:12:09 | INFO  | Task 5d1dcdbf-2bcd-4ad0-baee-ecf16c09479c (bootstrap) was prepared for execution.
2025-06-03 15:12:09.543299 | orchestrator | 2025-06-03 15:12:09 | INFO  | It takes a moment until task 5d1dcdbf-2bcd-4ad0-baee-ecf16c09479c (bootstrap) has been started and output is visible here.
2025-06-03 15:12:12.830365 | orchestrator |
2025-06-03 15:12:12.831727 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-06-03 15:12:12.834625 | orchestrator |
2025-06-03 15:12:12.834922 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-06-03 15:12:12.835465 | orchestrator | Tuesday 03 June 2025 15:12:12 +0000 (0:00:00.120) 0:00:00.120 **********
2025-06-03 15:12:12.902433 | orchestrator | ok: [testbed-manager]
2025-06-03 15:12:12.923050 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:12:12.939301 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:12:12.999683 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:12:12.999983 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:12:13.002013 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:12:13.002761 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:12:13.003788 | orchestrator |
2025-06-03 15:12:13.005475 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-03 15:12:13.005519 | orchestrator |
2025-06-03 15:12:13.006245 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-03 15:12:13.006917 | orchestrator | Tuesday 03 June 2025 15:12:12 +0000 (0:00:00.172) 0:00:00.293 **********
2025-06-03 15:12:16.344391 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:12:16.344746 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:12:16.347569 | orchestrator | ok: [testbed-manager]
2025-06-03 15:12:16.347628 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:12:16.348465 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:12:16.349216 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:12:16.350114 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:12:16.350936 | orchestrator |
2025-06-03 15:12:16.351829 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-06-03 15:12:16.352449 | orchestrator |
2025-06-03 15:12:16.353165 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-03 15:12:16.353813 | orchestrator | Tuesday 03 June 2025 15:12:16 +0000 (0:00:03.345) 0:00:03.639 **********
2025-06-03 15:12:16.430469 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-06-03 15:12:16.431267 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-06-03 15:12:16.431300 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-06-03 15:12:16.467906 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-06-03 15:12:16.468014 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-03 15:12:16.468452 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-06-03 15:12:16.468483 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-06-03 15:12:16.468975 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-03 15:12:16.469239 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-06-03 15:12:16.507456 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-03 15:12:16.507583 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-03 15:12:16.507715 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-06-03 15:12:16.507734 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-03 15:12:16.507935 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-03 15:12:16.766414 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-03 15:12:16.767252 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-06-03 15:12:16.768810 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-03 15:12:16.770772 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-03 15:12:16.771830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-06-03 15:12:16.772571 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-06-03 15:12:16.773530 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-06-03 15:12:16.774852 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:12:16.776528 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-03 15:12:16.777979 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-06-03 15:12:16.778733 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-03 15:12:16.780432 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-06-03 15:12:16.782046 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-03 15:12:16.783160 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:12:16.784378 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-06-03 15:12:16.784934 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-06-03 15:12:16.785958 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-06-03 15:12:16.786895 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-03 15:12:16.787673 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-06-03 15:12:16.788410 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-06-03 15:12:16.788986 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-06-03 15:12:16.789960 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:12:16.790365 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:12:16.791085 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-03 15:12:16.791762 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-03 15:12:16.792392 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-03 15:12:16.792826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-03 15:12:16.793345 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-03 15:12:16.793729 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-03 15:12:16.794107 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-03 15:12:16.794793 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-03 15:12:16.794880 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-03 15:12:16.795220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-03 15:12:16.795627 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:12:16.795796 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-06-03 15:12:16.796011 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-06-03 15:12:16.796342 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-06-03 15:12:16.796592 | orchestrator | skipping: [testbed-node-4] =>
(item=testbed-node-4)  2025-06-03 15:12:16.796740 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-06-03 15:12:16.798852 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-06-03 15:12:16.798886 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:12:16.802834 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:12:16.802860 | orchestrator | 2025-06-03 15:12:16.802874 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-06-03 15:12:16.802887 | orchestrator | 2025-06-03 15:12:16.802899 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-06-03 15:12:16.802911 | orchestrator | Tuesday 03 June 2025 15:12:16 +0000 (0:00:00.421) 0:00:04.060 ********** 2025-06-03 15:12:17.909772 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:17.909863 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:17.909920 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:17.910228 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:17.911979 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:17.912483 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:17.912848 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:17.913425 | orchestrator | 2025-06-03 15:12:17.914210 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-06-03 15:12:17.915910 | orchestrator | Tuesday 03 June 2025 15:12:17 +0000 (0:00:01.143) 0:00:05.204 ********** 2025-06-03 15:12:19.130581 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:19.134354 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:19.134423 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:19.134437 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:19.134503 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:19.134925 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:19.135786 | orchestrator | ok: 
[testbed-manager] 2025-06-03 15:12:19.136091 | orchestrator | 2025-06-03 15:12:19.136946 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-06-03 15:12:19.137677 | orchestrator | Tuesday 03 June 2025 15:12:19 +0000 (0:00:01.218) 0:00:06.422 ********** 2025-06-03 15:12:19.421456 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:12:19.422200 | orchestrator | 2025-06-03 15:12:19.423373 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-06-03 15:12:19.426242 | orchestrator | Tuesday 03 June 2025 15:12:19 +0000 (0:00:00.291) 0:00:06.714 ********** 2025-06-03 15:12:21.624260 | orchestrator | changed: [testbed-manager] 2025-06-03 15:12:21.624471 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:12:21.624940 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:12:21.625784 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:12:21.626178 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:12:21.626508 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:12:21.627183 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:12:21.627986 | orchestrator | 2025-06-03 15:12:21.628404 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-06-03 15:12:21.629022 | orchestrator | Tuesday 03 June 2025 15:12:21 +0000 (0:00:02.202) 0:00:08.916 ********** 2025-06-03 15:12:21.692806 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:12:21.918786 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:12:21.918889 | 
orchestrator | 2025-06-03 15:12:21.918968 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-06-03 15:12:21.918985 | orchestrator | Tuesday 03 June 2025 15:12:21 +0000 (0:00:00.296) 0:00:09.212 ********** 2025-06-03 15:12:22.969992 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:12:22.970289 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:12:22.970823 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:12:22.971882 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:12:22.972893 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:12:22.973593 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:12:22.977922 | orchestrator | 2025-06-03 15:12:22.977960 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-06-03 15:12:22.980085 | orchestrator | Tuesday 03 June 2025 15:12:22 +0000 (0:00:01.049) 0:00:10.262 ********** 2025-06-03 15:12:23.048417 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:12:23.538714 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:12:23.539209 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:12:23.539689 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:12:23.540675 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:12:23.542680 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:12:23.542777 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:12:23.542792 | orchestrator | 2025-06-03 15:12:23.542877 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-06-03 15:12:23.542894 | orchestrator | Tuesday 03 June 2025 15:12:23 +0000 (0:00:00.569) 0:00:10.832 ********** 2025-06-03 15:12:23.634368 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:12:23.660217 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:12:23.692022 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:12:23.997140 | orchestrator | 
skipping: [testbed-node-3] 2025-06-03 15:12:23.998486 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:12:24.000377 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:12:24.001013 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:24.005510 | orchestrator | 2025-06-03 15:12:24.005994 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-03 15:12:24.006829 | orchestrator | Tuesday 03 June 2025 15:12:23 +0000 (0:00:00.456) 0:00:11.288 ********** 2025-06-03 15:12:24.079045 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:12:24.111514 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:12:24.126592 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:12:24.154185 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:12:24.206231 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:12:24.207687 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:12:24.210592 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:12:24.211798 | orchestrator | 2025-06-03 15:12:24.213559 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-03 15:12:24.214718 | orchestrator | Tuesday 03 June 2025 15:12:24 +0000 (0:00:00.211) 0:00:11.500 ********** 2025-06-03 15:12:24.494472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:12:24.497522 | orchestrator | 2025-06-03 15:12:24.497557 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-03 15:12:24.497571 | orchestrator | Tuesday 03 June 2025 15:12:24 +0000 (0:00:00.287) 0:00:11.787 ********** 2025-06-03 15:12:24.804523 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:12:24.809237 | orchestrator | 2025-06-03 15:12:24.809299 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-03 15:12:24.809343 | orchestrator | Tuesday 03 June 2025 15:12:24 +0000 (0:00:00.309) 0:00:12.097 ********** 2025-06-03 15:12:26.028720 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:26.029560 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:26.030510 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:26.031184 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:26.031834 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:26.033894 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:26.034547 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:26.034911 | orchestrator | 2025-06-03 15:12:26.035964 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-03 15:12:26.036745 | orchestrator | Tuesday 03 June 2025 15:12:26 +0000 (0:00:01.223) 0:00:13.320 ********** 2025-06-03 15:12:26.102863 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:12:26.125574 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:12:26.149993 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:12:26.177110 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:12:26.226212 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:12:26.226371 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:12:26.226746 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:12:26.227188 | orchestrator | 2025-06-03 15:12:26.227690 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-03 15:12:26.228161 | orchestrator | Tuesday 03 June 2025 
15:12:26 +0000 (0:00:00.199) 0:00:13.520 ********** 2025-06-03 15:12:26.740691 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:26.741007 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:26.741832 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:26.744576 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:26.745400 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:26.746013 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:26.746669 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:26.747375 | orchestrator | 2025-06-03 15:12:26.748071 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-03 15:12:26.748601 | orchestrator | Tuesday 03 June 2025 15:12:26 +0000 (0:00:00.512) 0:00:14.032 ********** 2025-06-03 15:12:26.816272 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:12:26.842427 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:12:26.869916 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:12:26.894484 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:12:26.971625 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:12:26.972046 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:12:26.972858 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:12:26.974109 | orchestrator | 2025-06-03 15:12:26.974513 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-03 15:12:26.974775 | orchestrator | Tuesday 03 June 2025 15:12:26 +0000 (0:00:00.232) 0:00:14.265 ********** 2025-06-03 15:12:27.507786 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:27.508974 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:12:27.509875 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:12:27.510682 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:12:27.511120 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:12:27.511965 | orchestrator | changed: 
[testbed-node-4] 2025-06-03 15:12:27.513297 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:12:27.513471 | orchestrator | 2025-06-03 15:12:27.514255 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-03 15:12:27.514903 | orchestrator | Tuesday 03 June 2025 15:12:27 +0000 (0:00:00.535) 0:00:14.800 ********** 2025-06-03 15:12:28.585055 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:28.585827 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:12:28.586769 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:12:28.587387 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:12:28.588233 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:12:28.588896 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:12:28.590175 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:12:28.591027 | orchestrator | 2025-06-03 15:12:28.591590 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-03 15:12:28.593566 | orchestrator | Tuesday 03 June 2025 15:12:28 +0000 (0:00:01.077) 0:00:15.877 ********** 2025-06-03 15:12:29.674658 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:29.674827 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:29.675304 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:29.676152 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:29.676366 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:29.677368 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:29.678778 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:29.679826 | orchestrator | 2025-06-03 15:12:29.680707 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-03 15:12:29.681525 | orchestrator | Tuesday 03 June 2025 15:12:29 +0000 (0:00:01.089) 0:00:16.967 ********** 2025-06-03 15:12:30.076274 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:12:30.077065 | orchestrator | 2025-06-03 15:12:30.077810 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-03 15:12:30.079074 | orchestrator | Tuesday 03 June 2025 15:12:30 +0000 (0:00:00.399) 0:00:17.367 ********** 2025-06-03 15:12:30.149201 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:12:31.296973 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:12:31.297358 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:12:31.297874 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:12:31.298418 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:12:31.300385 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:12:31.302257 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:12:31.302274 | orchestrator | 2025-06-03 15:12:31.302492 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-03 15:12:31.303092 | orchestrator | Tuesday 03 June 2025 15:12:31 +0000 (0:00:01.221) 0:00:18.588 ********** 2025-06-03 15:12:31.375170 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:31.428832 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:31.453288 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:31.507785 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:31.508007 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:31.508725 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:31.509153 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:31.509526 | orchestrator | 2025-06-03 15:12:31.510374 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-03 15:12:31.510402 | orchestrator | Tuesday 03 June 2025 15:12:31 
+0000 (0:00:00.212) 0:00:18.801 ********** 2025-06-03 15:12:31.616207 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:31.643059 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:31.667930 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:31.730515 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:31.732585 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:31.733918 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:31.735494 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:31.736398 | orchestrator | 2025-06-03 15:12:31.737257 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-03 15:12:31.737999 | orchestrator | Tuesday 03 June 2025 15:12:31 +0000 (0:00:00.222) 0:00:19.023 ********** 2025-06-03 15:12:31.805629 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:31.832109 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:31.859242 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:31.884413 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:31.940430 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:31.941589 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:31.942115 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:31.943124 | orchestrator | 2025-06-03 15:12:31.943686 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-03 15:12:31.945252 | orchestrator | Tuesday 03 June 2025 15:12:31 +0000 (0:00:00.209) 0:00:19.233 ********** 2025-06-03 15:12:32.221130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:12:32.221464 | orchestrator | 2025-06-03 15:12:32.222977 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-03 15:12:32.223976 | 
orchestrator | Tuesday 03 June 2025 15:12:32 +0000 (0:00:00.281) 0:00:19.514 ********** 2025-06-03 15:12:32.727178 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:32.727399 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:32.728217 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:32.728584 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:32.728982 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:32.730590 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:32.730623 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:32.730635 | orchestrator | 2025-06-03 15:12:32.730687 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-03 15:12:32.731112 | orchestrator | Tuesday 03 June 2025 15:12:32 +0000 (0:00:00.506) 0:00:20.020 ********** 2025-06-03 15:12:32.827953 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:12:32.850223 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:12:32.874757 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:12:32.945540 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:12:32.945670 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:12:32.945741 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:12:32.946182 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:12:32.948470 | orchestrator | 2025-06-03 15:12:32.949104 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-03 15:12:32.949619 | orchestrator | Tuesday 03 June 2025 15:12:32 +0000 (0:00:00.218) 0:00:20.239 ********** 2025-06-03 15:12:33.978863 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:33.978973 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:12:33.981085 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:12:33.981183 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:33.981485 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:33.983680 | orchestrator | 
changed: [testbed-node-2] 2025-06-03 15:12:33.984255 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:33.985025 | orchestrator | 2025-06-03 15:12:33.985637 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-03 15:12:33.986202 | orchestrator | Tuesday 03 June 2025 15:12:33 +0000 (0:00:01.030) 0:00:21.269 ********** 2025-06-03 15:12:34.529530 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:34.530105 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:34.533206 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:34.533228 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:34.533238 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:34.533247 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:34.533660 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:34.534443 | orchestrator | 2025-06-03 15:12:34.535066 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-03 15:12:34.535579 | orchestrator | Tuesday 03 June 2025 15:12:34 +0000 (0:00:00.552) 0:00:21.822 ********** 2025-06-03 15:12:35.585801 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:35.585976 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:35.586784 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:12:35.587801 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:35.588503 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:12:35.589057 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:35.589598 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:12:35.590586 | orchestrator | 2025-06-03 15:12:35.590723 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-03 15:12:35.591362 | orchestrator | Tuesday 03 June 2025 15:12:35 +0000 (0:00:01.053) 0:00:22.875 ********** 2025-06-03 15:12:49.034671 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:49.034797 | orchestrator | ok: 
[testbed-node-3] 2025-06-03 15:12:49.039422 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:49.040458 | orchestrator | changed: [testbed-manager] 2025-06-03 15:12:49.041191 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:12:49.044676 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:12:49.045032 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:12:49.045869 | orchestrator | 2025-06-03 15:12:49.046540 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-06-03 15:12:49.047030 | orchestrator | Tuesday 03 June 2025 15:12:49 +0000 (0:00:13.443) 0:00:36.319 ********** 2025-06-03 15:12:49.115010 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:49.151360 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:49.181651 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:49.203473 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:49.276671 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:49.276902 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:49.277329 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:49.277777 | orchestrator | 2025-06-03 15:12:49.278075 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-06-03 15:12:49.278578 | orchestrator | Tuesday 03 June 2025 15:12:49 +0000 (0:00:00.250) 0:00:36.570 ********** 2025-06-03 15:12:49.362518 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:49.396855 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:49.421049 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:49.450253 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:49.507113 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:49.507972 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:49.509096 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:49.510086 | orchestrator | 2025-06-03 15:12:49.511525 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to 
default value] *** 2025-06-03 15:12:49.512107 | orchestrator | Tuesday 03 June 2025 15:12:49 +0000 (0:00:00.230) 0:00:36.800 ********** 2025-06-03 15:12:49.586250 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:49.622501 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:49.657678 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:49.680760 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:49.749560 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:49.750262 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:49.750970 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:49.751622 | orchestrator | 2025-06-03 15:12:49.752167 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-06-03 15:12:49.752968 | orchestrator | Tuesday 03 June 2025 15:12:49 +0000 (0:00:00.242) 0:00:37.043 ********** 2025-06-03 15:12:50.034146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:12:50.035103 | orchestrator | 2025-06-03 15:12:50.039416 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-06-03 15:12:50.039452 | orchestrator | Tuesday 03 June 2025 15:12:50 +0000 (0:00:00.284) 0:00:37.328 ********** 2025-06-03 15:12:51.850504 | orchestrator | ok: [testbed-manager] 2025-06-03 15:12:51.850691 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:12:51.851925 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:12:51.853243 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:12:51.853686 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:12:51.854515 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:12:51.855575 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:12:51.855864 | orchestrator | 2025-06-03 15:12:51.856643 | orchestrator | TASK 
[osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-06-03 15:12:51.857689 | orchestrator | Tuesday 03 June 2025 15:12:51 +0000 (0:00:01.812) 0:00:39.140 **********
2025-06-03 15:12:52.944642 | orchestrator | changed: [testbed-manager]
2025-06-03 15:12:52.947293 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:12:52.947368 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:12:52.947849 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:12:52.948616 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:12:52.949467 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:12:52.950192 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:12:52.951055 | orchestrator |
2025-06-03 15:12:52.951651 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-06-03 15:12:52.952423 | orchestrator | Tuesday 03 June 2025 15:12:52 +0000 (0:00:01.096) 0:00:40.236 **********
2025-06-03 15:12:53.771718 | orchestrator | ok: [testbed-manager]
2025-06-03 15:12:53.773909 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:12:53.773949 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:12:53.773960 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:12:53.775036 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:12:53.776001 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:12:53.776875 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:12:53.777785 | orchestrator |
2025-06-03 15:12:53.778239 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-06-03 15:12:53.778803 | orchestrator | Tuesday 03 June 2025 15:12:53 +0000 (0:00:00.825) 0:00:41.062 **********
2025-06-03 15:12:54.052236 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:12:54.052466 | orchestrator |
2025-06-03 15:12:54.053266 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-06-03 15:12:54.053743 | orchestrator | Tuesday 03 June 2025 15:12:54 +0000 (0:00:00.283) 0:00:41.345 **********
2025-06-03 15:12:55.116811 | orchestrator | changed: [testbed-manager]
2025-06-03 15:12:55.116931 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:12:55.117027 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:12:55.118136 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:12:55.119191 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:12:55.120378 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:12:55.121451 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:12:55.122682 | orchestrator |
2025-06-03 15:12:55.123412 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-06-03 15:12:55.123923 | orchestrator | Tuesday 03 June 2025 15:12:55 +0000 (0:00:01.060) 0:00:42.405 **********
2025-06-03 15:12:55.196752 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:12:55.215377 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:12:55.270379 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:12:55.420055 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:12:55.420181 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:12:55.420583 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:12:55.420615 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:12:55.420944 | orchestrator |
2025-06-03 15:12:55.423582 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-06-03 15:12:55.423857 | orchestrator | Tuesday 03 June 2025 15:12:55 +0000 (0:00:00.308) 0:00:42.713 **********
2025-06-03 15:13:07.535992 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:13:07.536391 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:13:07.536421 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:13:07.536466 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:13:07.536480 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:13:07.536492 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:13:07.536503 | orchestrator | changed: [testbed-manager]
2025-06-03 15:13:07.536523 | orchestrator |
2025-06-03 15:13:07.536801 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-06-03 15:13:07.537030 | orchestrator | Tuesday 03 June 2025 15:13:07 +0000 (0:00:12.112) 0:00:54.826 **********
2025-06-03 15:13:08.654514 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:13:08.655002 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:13:08.657630 | orchestrator | ok: [testbed-manager]
2025-06-03 15:13:08.657679 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:13:08.657692 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:13:08.658161 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:13:08.658965 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:13:08.659582 | orchestrator |
2025-06-03 15:13:08.660095 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-06-03 15:13:08.660672 | orchestrator | Tuesday 03 June 2025 15:13:08 +0000 (0:00:01.119) 0:00:55.945 **********
2025-06-03 15:13:09.545740 | orchestrator | ok: [testbed-manager]
2025-06-03 15:13:09.546747 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:13:09.547802 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:13:09.548946 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:13:09.549193 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:13:09.550512 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:13:09.550623 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:13:09.551110 | orchestrator |
2025-06-03 15:13:09.551883 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-06-03 15:13:09.552140 | orchestrator | Tuesday 03 June 2025 15:13:09 +0000 (0:00:00.892) 0:00:56.838 **********
2025-06-03 15:13:09.609076 | orchestrator | ok: [testbed-manager]
2025-06-03 15:13:09.675497 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:13:09.700880 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:13:09.732809 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:13:09.792083 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:13:09.792687 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:13:09.792718 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:13:09.792980 | orchestrator |
2025-06-03 15:13:09.793414 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-06-03 15:13:09.793918 | orchestrator | Tuesday 03 June 2025 15:13:09 +0000 (0:00:00.248) 0:00:57.086 **********
2025-06-03 15:13:09.866422 | orchestrator | ok: [testbed-manager]
2025-06-03 15:13:09.892862 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:13:09.918267 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:13:09.938944 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:13:09.997001 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:13:09.997098 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:13:09.997790 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:13:09.999157 | orchestrator |
2025-06-03 15:13:09.999756 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-06-03 15:13:10.001162 | orchestrator | Tuesday 03 June 2025 15:13:09 +0000 (0:00:00.203) 0:00:57.290 **********
2025-06-03 15:13:10.290942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:13:10.291116 | orchestrator |
2025-06-03 15:13:10.291465 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-06-03 15:13:10.292584 | orchestrator | Tuesday 03 June 2025 15:13:10 +0000 (0:00:00.293) 0:00:57.583 **********
2025-06-03 15:13:11.966191 | orchestrator | ok: [testbed-manager]
2025-06-03 15:13:11.966605 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:13:11.967399 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:13:11.967940 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:13:11.968533 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:13:11.969326 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:13:11.970084 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:13:11.970532 | orchestrator |
2025-06-03 15:13:11.971170 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-06-03 15:13:11.971561 | orchestrator | Tuesday 03 June 2025 15:13:11 +0000 (0:00:01.674) 0:00:59.257 **********
2025-06-03 15:13:12.558819 | orchestrator | changed: [testbed-manager]
2025-06-03 15:13:12.558944 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:13:12.558955 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:13:12.559017 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:13:12.559544 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:13:12.560962 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:13:12.561178 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:13:12.561934 | orchestrator |
2025-06-03 15:13:12.562405 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-06-03 15:13:12.563352 | orchestrator | Tuesday 03 June 2025 15:13:12 +0000 (0:00:00.593) 0:00:59.851 **********
2025-06-03 15:13:12.635064 | orchestrator | ok: [testbed-manager]
2025-06-03 15:13:12.664969 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:13:12.692993 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:13:12.729171 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:13:12.804051 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:13:12.804301 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:13:12.804845 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:13:12.805289 | orchestrator |
2025-06-03 15:13:12.805915 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-06-03 15:13:12.806450 | orchestrator | Tuesday 03 June 2025 15:13:12 +0000 (0:00:00.246) 0:01:00.098 **********
2025-06-03 15:13:13.951030 | orchestrator | ok: [testbed-manager]
2025-06-03 15:13:13.952659 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:13:13.953651 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:13:13.954163 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:13:13.954575 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:13:13.955028 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:13:13.955665 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:13:13.956517 | orchestrator |
2025-06-03 15:13:13.957483 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-06-03 15:13:13.957603 | orchestrator | Tuesday 03 June 2025 15:13:13 +0000 (0:00:01.145) 0:01:01.243 **********
2025-06-03 15:13:15.521001 | orchestrator | changed: [testbed-manager]
2025-06-03 15:13:15.521213 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:13:15.521661 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:13:15.525702 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:13:15.527900 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:13:15.528453 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:13:15.529089 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:13:15.531025 | orchestrator |
2025-06-03 15:13:15.532905 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-06-03 15:13:15.534058 | orchestrator | Tuesday 03 June 2025 15:13:15 +0000 (0:00:01.569) 0:01:02.812 **********
2025-06-03 15:13:17.820653 | orchestrator | ok: [testbed-manager]
2025-06-03 15:13:17.820758 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:13:17.820773 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:13:17.820783 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:13:17.820793 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:13:17.820803 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:13:17.822572 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:13:17.823769 | orchestrator |
2025-06-03 15:13:17.823857 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-06-03 15:13:17.824504 | orchestrator | Tuesday 03 June 2025 15:13:17 +0000 (0:00:02.296) 0:01:05.108 **********
2025-06-03 15:13:52.875762 | orchestrator | ok: [testbed-manager]
2025-06-03 15:13:52.875889 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:13:52.876461 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:13:52.877877 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:13:52.879006 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:13:52.882446 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:13:52.883201 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:13:52.883910 | orchestrator |
2025-06-03 15:13:52.885065 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-06-03 15:13:52.885774 | orchestrator | Tuesday 03 June 2025 15:13:52 +0000 (0:00:35.057) 0:01:40.166 **********
2025-06-03 15:15:09.330598 | orchestrator | changed: [testbed-manager]
2025-06-03 15:15:09.330715 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:15:09.330729 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:15:09.330801 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:15:09.331345 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:15:09.332271 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:15:09.333422 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:15:09.334480 | orchestrator |
2025-06-03 15:15:09.334788 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-06-03 15:15:09.335954 | orchestrator | Tuesday 03 June 2025 15:15:09 +0000 (0:01:16.453) 0:02:56.620 **********
2025-06-03 15:15:11.025664 | orchestrator | ok: [testbed-manager]
2025-06-03 15:15:11.025783 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:15:11.025937 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:15:11.026152 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:15:11.026806 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:15:11.027197 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:15:11.029115 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:15:11.029140 | orchestrator |
2025-06-03 15:15:11.029152 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-06-03 15:15:11.029218 | orchestrator | Tuesday 03 June 2025 15:15:11 +0000 (0:00:01.696) 0:02:58.316 **********
2025-06-03 15:15:22.596634 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:15:22.596759 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:15:22.596774 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:15:22.596786 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:15:22.596857 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:15:22.599702 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:15:22.600203 | orchestrator | changed: [testbed-manager]
2025-06-03 15:15:22.601251 | orchestrator |
2025-06-03 15:15:22.601991 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-06-03 15:15:22.602986 | orchestrator | Tuesday 03 June 2025 15:15:22 +0000 (0:00:11.567) 0:03:09.884 **********
2025-06-03 15:15:22.961638 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-06-03 15:15:22.965237 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-06-03 15:15:22.965285 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-06-03 15:15:22.965328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-06-03 15:15:22.965388 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-06-03 15:15:22.965759 | orchestrator |
2025-06-03 15:15:22.966993 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-06-03 15:15:22.967501 | orchestrator | Tuesday 03 June 2025 15:15:22 +0000 (0:00:00.370) 0:03:10.254 **********
2025-06-03 15:15:22.994158 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-03 15:15:23.049584 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:15:23.124055 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-03 15:15:23.124163 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-03 15:15:23.589703 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:15:23.589814 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:15:23.589889 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-03 15:15:23.590448 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:15:23.590888 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-03 15:15:23.591782 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-03 15:15:23.592105 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-03 15:15:23.593653 | orchestrator |
2025-06-03 15:15:23.595221 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-06-03 15:15:23.596521 | orchestrator | Tuesday 03 June 2025 15:15:23 +0000 (0:00:00.627) 0:03:10.882 **********
2025-06-03 15:15:23.659922 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-03 15:15:23.660051 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-03 15:15:23.660433 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-03 15:15:23.661173 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-03 15:15:23.661699 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-03 15:15:23.663070 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-03 15:15:23.663104 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-03 15:15:23.663547 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-03 15:15:23.664148 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-03 15:15:23.664808 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-03 15:15:23.680617 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:15:23.755240 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-03 15:15:23.755449 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-03 15:15:23.755878 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-03 15:15:23.756567 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-03 15:15:23.756916 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-03 15:15:28.269672 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-03 15:15:28.271043 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-03 15:15:28.272860 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-03 15:15:28.273556 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-03 15:15:28.273999 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-03 15:15:28.276038 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-03 15:15:28.276376 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-03 15:15:28.277476 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-03 15:15:28.278159 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-03 15:15:28.279532 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:15:28.280652 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-03 15:15:28.281249 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-03 15:15:28.282492 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-03 15:15:28.282892 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-03 15:15:28.283497 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-03 15:15:28.284036 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-03 15:15:28.284818 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-03 15:15:28.285449 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-03 15:15:28.286061 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-03 15:15:28.286287 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-03 15:15:28.286872 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:15:28.287618 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-03 15:15:28.287971 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-03 15:15:28.288590 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-03 15:15:28.289224 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-03 15:15:28.289548 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-03 15:15:28.289962 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-03 15:15:28.291108 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:15:28.291456 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-03 15:15:28.291861 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-03 15:15:28.292481 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-03 15:15:28.293007 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-03 15:15:28.293908 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-03 15:15:28.294148 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-03 15:15:28.295904 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-03 15:15:28.297795 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-03 15:15:28.299380 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-03 15:15:28.303543 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-03 15:15:28.307010 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-03 15:15:28.307050 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-03 15:15:28.307069 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-03 15:15:28.307173 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-03 15:15:28.308110 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-03 15:15:28.309600 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-03 15:15:28.310471 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-03 15:15:28.311542 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-03 15:15:28.312800 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-03 15:15:28.313610 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-03 15:15:28.314494 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-03 15:15:28.315155 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-03 15:15:28.315845 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-03 15:15:28.317591 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-03 15:15:28.317899 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-03 15:15:28.318794 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-03 15:15:28.319504 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-03 15:15:28.320222 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-03 15:15:28.320799 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-03 15:15:28.321634 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-03 15:15:28.322278 | orchestrator |
2025-06-03 15:15:28.322893 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-06-03 15:15:28.323793 | orchestrator | Tuesday 03 June 2025 15:15:28 +0000 (0:00:04.678) 0:03:15.561 **********
2025-06-03 15:15:28.883104 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-03 15:15:28.885822 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-03 15:15:28.893277 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-03 15:15:28.893385 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-03 15:15:28.893399 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-03 15:15:28.893436 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-03 15:15:28.893450 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-03 15:15:28.893462 | orchestrator |
2025-06-03 15:15:28.893537 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-06-03 15:15:28.893552 | orchestrator | Tuesday 03 June 2025 15:15:28 +0000 (0:00:00.613) 0:03:16.174 **********
2025-06-03 15:15:28.947164 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-03 15:15:28.978540 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-03 15:15:28.979415 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:15:29.026778 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-03 15:15:29.026894 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:15:29.029356 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-03 15:15:29.051399 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:15:29.079463 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:15:29.499694 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-03 15:15:29.500762 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-03 15:15:29.500930 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-03 15:15:29.502106 | orchestrator |
2025-06-03 15:15:29.503134 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-06-03 15:15:29.503998 | orchestrator | Tuesday 03 June 2025 15:15:29 +0000 (0:00:00.616) 0:03:16.791 **********
2025-06-03 15:15:29.556657 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-03 15:15:29.589659 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:15:29.590120 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-03 15:15:29.617672 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:15:29.618201 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-03 15:15:29.647463 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:15:29.647841 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-03 15:15:29.674170 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:15:30.141515 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-03 15:15:30.141897 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-03 15:15:30.142875 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-03 15:15:30.143715 | orchestrator |
2025-06-03 15:15:30.144816 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-06-03 15:15:30.146419 | orchestrator | Tuesday 03 June 2025 15:15:30 +0000 (0:00:00.642) 0:03:17.434 **********
2025-06-03 15:15:30.220836 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:15:30.242189 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:15:30.273149 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:15:30.293056 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:15:30.410742 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:15:30.413593 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:15:30.413659 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:15:30.413678 | orchestrator |
2025-06-03 15:15:30.413697 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-06-03 15:15:30.413979 | orchestrator | Tuesday 03 June 2025 15:15:30 +0000 (0:00:00.268) 0:03:17.702 **********
2025-06-03 15:15:35.937752 | orchestrator | ok: [testbed-manager]
2025-06-03 15:15:35.938672 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:15:35.940194 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:15:35.940727 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:15:35.941889 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:15:35.942615 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:15:35.943479 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:15:35.944275 | orchestrator |
2025-06-03 15:15:35.945080 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-06-03 15:15:35.946082 | orchestrator | Tuesday 03 June 2025 15:15:35 +0000 (0:00:05.527) 0:03:23.230 **********
2025-06-03 15:15:35.978206 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-06-03 15:15:36.012084 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-06-03 15:15:36.049060 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:15:36.049633 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-06-03 15:15:36.081029 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:15:36.118816 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-06-03 15:15:36.119980 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:15:36.121001 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-06-03 15:15:36.149250 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:15:36.212159 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:15:36.213081 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-06-03 15:15:36.214870 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:15:36.215736 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-06-03 15:15:36.216263 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:15:36.217019 | orchestrator |
2025-06-03 15:15:36.217693 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-06-03 15:15:36.218253 | orchestrator | Tuesday 03 June 2025 15:15:36 +0000 (0:00:00.275) 0:03:23.506 **********
2025-06-03 15:15:37.285256 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-06-03 15:15:37.285887 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-06-03 15:15:37.289396 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-06-03 15:15:37.289431 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-06-03 15:15:37.289443 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-06-03 15:15:37.289455 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-06-03 15:15:37.289466 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-06-03 15:15:37.289520 | orchestrator |
2025-06-03 15:15:37.290474 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-06-03 15:15:37.290886 | orchestrator | Tuesday 03 June 2025 15:15:37 +0000 (0:00:01.071) 0:03:24.577 **********
2025-06-03 15:15:37.684227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:15:37.690267 | orchestrator |
2025-06-03 15:15:37.690367 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-06-03 15:15:37.690389 | orchestrator | Tuesday 03 June 2025 15:15:37 +0000 (0:00:00.398) 0:03:24.976 **********
2025-06-03 15:15:38.956088 | orchestrator | ok: [testbed-manager]
2025-06-03 15:15:38.957577 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:15:38.958153 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:15:38.959475 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:15:38.960450 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:15:38.961261 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:15:38.961870 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:15:38.962620 | orchestrator |
2025-06-03 15:15:38.963172 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-06-03 15:15:38.964335 | orchestrator | Tuesday 03 June 2025 15:15:38 +0000 (0:00:01.271) 0:03:26.248 **********
2025-06-03 15:15:39.556110 | orchestrator | ok: [testbed-manager]
2025-06-03 15:15:39.556770 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:15:39.557989 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:15:39.561515 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:15:39.564896 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:15:39.565780 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:15:39.567024 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:15:39.567822 | orchestrator |
2025-06-03 15:15:39.568507 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-06-03 15:15:39.568945 | orchestrator | Tuesday 03 June 2025 15:15:39 +0000 (0:00:00.599) 0:03:26.848 **********
2025-06-03 15:15:40.141370 | orchestrator | changed: [testbed-manager]
2025-06-03 15:15:40.142081 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:15:40.143019 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:15:40.144891 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:15:40.144918 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:15:40.146155 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:15:40.147383 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:15:40.148388 | orchestrator |
2025-06-03 15:15:40.149218 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-06-03 15:15:40.149975 | orchestrator | Tuesday 03 June 2025 15:15:40 +0000 (0:00:00.587) 0:03:27.435 **********
2025-06-03 15:15:40.717634 | orchestrator | ok: [testbed-manager]
2025-06-03 15:15:40.718466 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:15:40.719333 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:15:40.722213 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:15:40.722260 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:15:40.722276 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:15:40.722372 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:15:40.722392 | orchestrator |
2025-06-03 15:15:40.722494 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-06-03 15:15:40.722900 | orchestrator | Tuesday 03 June 2025 15:15:40 +0000 (0:00:00.574) 0:03:28.010 **********
2025-06-03 15:15:41.669714 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748962337.3219004, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:15:41.670184 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748962391.1233792, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:15:41.671058 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748962392.7014034, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:15:41.671819 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748962389.370525, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:15:41.672645 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748962389.6948147, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:15:41.673686 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748962397.3139412, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:15:41.675148 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748962407.3923118, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:15:41.676549 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748962366.963109, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:15:41.676576 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748962283.6378222, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:15:41.677571 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748962289.2635264, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:15:41.678433 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748962288.1443224, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:15:41.679177 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748962288.8106928, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:15:41.680262 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748962295.6816993, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:15:41.681112 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748962298.7364783, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:15:41.681859 | orchestrator |
2025-06-03 15:15:41.682799 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-06-03 15:15:41.683045 | orchestrator | Tuesday 03 June 2025 15:15:41 +0000 (0:00:00.952) 0:03:28.962 **********
2025-06-03 15:15:42.824373 | orchestrator | changed: [testbed-manager]
2025-06-03 15:15:42.826276 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:15:42.826347 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:15:42.827635 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:15:42.828441 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:15:42.829417 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:15:42.830567 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:15:42.831149 | orchestrator |
2025-06-03 15:15:42.831411 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-06-03 15:15:42.832212 | orchestrator | Tuesday 03 June 2025 15:15:42 +0000 (0:00:01.152) 0:03:30.115 **********
2025-06-03 15:15:43.990704 | orchestrator | changed: [testbed-manager]
2025-06-03 15:15:43.993507 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:15:43.993539 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:15:43.995424 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:15:43.996418 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:15:43.997708 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:15:43.998986 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:15:43.999906 | orchestrator |
2025-06-03 15:15:44.000694 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-06-03 15:15:44.001774 | orchestrator | Tuesday 03 June 2025 15:15:43 +0000 (0:00:01.166) 0:03:31.281 **********
2025-06-03 15:15:45.116449 | orchestrator | changed: [testbed-manager]
2025-06-03 15:15:45.117196 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:15:45.118963 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:15:45.119350 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:15:45.119819 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:15:45.120389 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:15:45.120997 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:15:45.121500 | orchestrator |
2025-06-03 15:15:45.122065 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-06-03 15:15:45.122435 | orchestrator | Tuesday 03 June 2025 15:15:45 +0000 (0:00:01.126) 0:03:32.407 **********
2025-06-03 15:15:45.193868 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:15:45.225749 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:15:45.254795 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:15:45.298796 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:15:45.332712 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:15:45.388332 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:15:45.389571 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:15:45.390116 | orchestrator |
2025-06-03 15:15:45.391801 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-06-03 15:15:45.392537 | orchestrator | Tuesday 03 June 2025 15:15:45 +0000 (0:00:00.273) 0:03:32.681 **********
2025-06-03 15:15:46.107007 | orchestrator | ok: [testbed-manager]
2025-06-03 15:15:46.110662 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:15:46.110707 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:15:46.110720 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:15:46.110731 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:15:46.110742 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:15:46.111429 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:15:46.112027 | orchestrator |
2025-06-03 15:15:46.112716 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-06-03 15:15:46.113341 | orchestrator | Tuesday 03 June 2025 15:15:46 +0000 (0:00:00.717) 0:03:33.398 **********
2025-06-03 15:15:46.476684 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:15:46.477179 | orchestrator |
2025-06-03 15:15:46.478203 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-06-03 15:15:46.484263 | orchestrator | Tuesday 03 June 2025 15:15:46 +0000 (0:00:00.371) 0:03:33.769 **********
2025-06-03 15:15:54.883212 | orchestrator | ok: [testbed-manager]
2025-06-03 15:15:54.884529 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:15:54.884570 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:15:54.884582 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:15:54.886187 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:15:54.887686 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:15:54.888104 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:15:54.888864 | orchestrator |
2025-06-03 15:15:54.889566 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-06-03 15:15:54.890118 | orchestrator | Tuesday 03 June 2025 15:15:54 +0000 (0:00:08.403) 0:03:42.173 **********
2025-06-03 15:15:56.144515 | orchestrator | ok: [testbed-manager]
2025-06-03 15:15:56.144623 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:15:56.144639 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:15:56.146934 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:15:56.147266 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:15:56.149218 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:15:56.149259 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:15:56.149273 | orchestrator |
2025-06-03 15:15:56.149550 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-06-03 15:15:56.150544 | orchestrator | Tuesday 03 June 2025 15:15:56 +0000 (0:00:01.262) 0:03:43.436 **********
2025-06-03 15:15:57.211372 | orchestrator | ok: [testbed-manager]
2025-06-03 15:15:57.213604 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:15:57.213649 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:15:57.213772 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:15:57.214281 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:15:57.215178 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:15:57.215664 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:15:57.216056 | orchestrator |
2025-06-03 15:15:57.216657 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-06-03 15:15:57.217196 | orchestrator | Tuesday 03 June 2025 15:15:57 +0000 (0:00:01.065) 0:03:44.501 **********
2025-06-03 15:15:57.719184 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:15:57.719358 | orchestrator |
2025-06-03 15:15:57.720497 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-06-03 15:15:57.724603 | orchestrator | Tuesday 03 June 2025 15:15:57 +0000 (0:00:00.510) 0:03:45.011 **********
2025-06-03 15:16:06.291768 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:16:06.291893 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:16:06.291910 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:16:06.291922 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:16:06.292179 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:16:06.293879 | orchestrator | changed: [testbed-manager]
2025-06-03 15:16:06.294629 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:16:06.296775 | orchestrator |
2025-06-03 15:16:06.297576 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-06-03 15:16:06.297918 | orchestrator | Tuesday 03 June 2025 15:16:06 +0000 (0:00:08.568) 0:03:53.580 **********
2025-06-03 15:16:06.906893 | orchestrator | changed: [testbed-manager]
2025-06-03 15:16:06.907063 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:16:06.908870 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:16:06.910375 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:16:06.911472 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:16:06.912375 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:16:06.913718 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:16:06.914370 | orchestrator |
2025-06-03 15:16:06.915200 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-06-03 15:16:06.915988 | orchestrator | Tuesday 03 June 2025 15:16:06 +0000 (0:00:00.617) 0:03:54.197 **********
2025-06-03 15:16:08.047789 | orchestrator | changed: [testbed-manager]
2025-06-03 15:16:08.047902 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:16:08.047991 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:16:08.048008 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:16:08.051392 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:16:08.051449 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:16:08.051469 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:16:08.051490 | orchestrator |
2025-06-03 15:16:08.051512 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-06-03 15:16:08.051572 | orchestrator | Tuesday 03 June 2025 15:16:08 +0000 (0:00:01.140) 0:03:55.338 **********
2025-06-03 15:16:09.116425 | orchestrator | changed: [testbed-manager]
2025-06-03 15:16:09.119598 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:16:09.119662 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:16:09.119674 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:16:09.119684 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:16:09.119739 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:16:09.120724 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:16:09.121748 | orchestrator |
2025-06-03 15:16:09.122570 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-06-03 15:16:09.122713 | orchestrator | Tuesday 03 June 2025 15:16:09 +0000 (0:00:01.069) 0:03:56.407 **********
2025-06-03 15:16:09.219752 | orchestrator | ok: [testbed-manager]
2025-06-03 15:16:09.255256 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:16:09.306263 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:16:09.342067 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:16:09.419767 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:16:09.420684 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:16:09.424345 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:16:09.424390 | orchestrator |
2025-06-03 15:16:09.425671 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-06-03 15:16:09.426836 | orchestrator | Tuesday 03 June 2025 15:16:09 +0000 (0:00:00.305) 0:03:56.713 **********
2025-06-03 15:16:09.543564 | orchestrator | ok: [testbed-manager]
2025-06-03 15:16:09.579098 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:16:09.630618 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:16:09.666177 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:16:09.740866 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:16:09.741070 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:16:09.741857 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:16:09.742786 | orchestrator |
2025-06-03 15:16:09.743628 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-06-03 15:16:09.744816 | orchestrator | Tuesday 03 June 2025 15:16:09 +0000 (0:00:00.320) 0:03:57.033 **********
2025-06-03 15:16:09.844104 | orchestrator | ok: [testbed-manager]
2025-06-03 15:16:09.885357 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:16:09.914709 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:16:09.948888 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:16:10.026127 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:16:10.026742 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:16:10.027270 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:16:10.027906 | orchestrator |
2025-06-03 15:16:10.028396 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-06-03 15:16:10.029168 | orchestrator | Tuesday 03 June 2025 15:16:10 +0000 (0:00:00.286) 0:03:57.320 **********
2025-06-03 15:16:15.817029 | orchestrator | ok: [testbed-manager]
2025-06-03 15:16:15.817154 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:16:15.817233 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:16:15.817780 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:16:15.818106 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:16:15.818702 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:16:15.819130 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:16:15.819485 | orchestrator |
2025-06-03 15:16:15.819629 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-06-03 15:16:15.820421 | orchestrator | Tuesday 03 June 2025 15:16:15 +0000 (0:00:05.790) 0:04:03.110 **********
2025-06-03 15:16:16.228776 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:16:16.230740 | orchestrator |
2025-06-03 15:16:16.230806 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-06-03 15:16:16.231445 | orchestrator | Tuesday 03 June 2025 15:16:16 +0000 (0:00:00.409) 0:04:03.520 **********
2025-06-03 15:16:16.304889 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-06-03 15:16:16.305532 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-06-03 15:16:16.352888 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-06-03 15:16:16.353353 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:16:16.353971 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-06-03 15:16:16.354275 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-06-03 15:16:16.395568 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-06-03 15:16:16.396643 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:16:16.397465 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-06-03 15:16:16.456126 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:16:16.457013 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-06-03 15:16:16.458169 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-06-03 15:16:16.459162 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-06-03 15:16:16.513790 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:16:16.515836 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-06-03 15:16:16.517591 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-06-03 15:16:16.614617 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:16:16.615961 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:16:16.617623 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-06-03 15:16:16.618786 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-06-03 15:16:16.619262 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:16:16.620258 | orchestrator |
2025-06-03 15:16:16.621030 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-06-03 15:16:16.621658 | orchestrator | Tuesday 03 June 2025 15:16:16 +0000 (0:00:00.384) 0:04:03.904 **********
2025-06-03 15:16:17.008505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:16:17.008710 | orchestrator |
2025-06-03 15:16:17.009358 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-06-03 15:16:17.013753 | orchestrator | Tuesday 03 June 2025 15:16:17 +0000 (0:00:00.395) 0:04:04.300 **********
2025-06-03 15:16:17.080589 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-06-03 15:16:17.119920 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:16:17.120591 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-06-03 15:16:17.161751 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:16:17.162325 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-06-03 15:16:17.163751 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-06-03 15:16:17.203776 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:16:17.203878 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-06-03 15:16:17.237844 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:16:17.238923 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-06-03 15:16:17.308187 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:16:17.308507 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:16:17.309824 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-06-03 15:16:17.310785 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:16:17.311753 | orchestrator |
2025-06-03 15:16:17.313643 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-06-03 15:16:17.314248 | orchestrator | Tuesday 03 June 2025 15:16:17 +0000 (0:00:00.301) 0:04:04.601 **********
2025-06-03 15:16:17.860573 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:16:17.861105 | orchestrator |
2025-06-03 15:16:17.861812 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-06-03 15:16:17.862267 | orchestrator | Tuesday 03 June 2025 15:16:17 +0000 (0:00:00.552) 0:04:05.154 **********
2025-06-03 15:16:53.514570 | orchestrator | changed: [testbed-manager]
2025-06-03 15:16:53.514715 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:16:53.514804 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:16:53.516468 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:16:53.516565 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:16:53.518085 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:16:53.519709 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:16:53.520843 | orchestrator |
2025-06-03 15:16:53.521426 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-06-03 15:16:53.522151 | orchestrator | Tuesday 03 June 2025 15:16:53 +0000 (0:00:35.652) 0:04:40.806 **********
2025-06-03 15:17:01.271074 | orchestrator | changed: [testbed-manager]
2025-06-03 15:17:01.271400 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:17:01.273019 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:17:01.274535 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:17:01.276238 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:17:01.276584 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:17:01.277758 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:17:01.278532 | orchestrator |
2025-06-03 15:17:01.279718 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-06-03 15:17:01.280424 | orchestrator | Tuesday 03 June 2025 15:17:01 +0000 (0:00:07.756) 0:04:48.562 **********
2025-06-03 15:17:08.493747 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:17:08.496591 | orchestrator | changed: [testbed-manager]
2025-06-03 15:17:08.496656 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:17:08.497334 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:17:08.498370 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:17:08.499888 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:17:08.500530 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:17:08.501433 | orchestrator |
2025-06-03 15:17:08.501969 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-06-03 15:17:08.502455 | orchestrator | Tuesday 03 June 2025 15:17:08 +0000 (0:00:07.223) 0:04:55.785 **********
2025-06-03 15:17:10.334203 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:17:10.334624 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:17:10.336479 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:17:10.336537 | orchestrator | ok: [testbed-manager]
2025-06-03 15:17:10.338091 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:17:10.338124 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:17:10.338456 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:17:10.338997 | orchestrator |
2025-06-03 15:17:10.339639 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-06-03 15:17:10.340564 | orchestrator | Tuesday 03 June 2025 15:17:10 +0000 (0:00:01.840) 0:04:57.626 **********
2025-06-03 15:17:15.933017 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:17:15.933138 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:17:15.935066 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:17:15.937580 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:17:15.938653 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:17:15.939501 | orchestrator | changed: [testbed-manager]
2025-06-03 15:17:15.940891 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:17:15.941885 | orchestrator |
2025-06-03 15:17:15.943035 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-06-03 15:17:15.943558 | orchestrator | Tuesday 03 June 2025 15:17:15 +0000 (0:00:05.598) 0:05:03.224 **********
2025-06-03 15:17:16.348239 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:17:16.348622 | orchestrator |
2025-06-03 15:17:16.349690 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-06-03 15:17:16.349753 | orchestrator | Tuesday 03 June 2025 15:17:16 +0000 (0:00:00.417) 0:05:03.642 **********
2025-06-03 15:17:17.095132 | orchestrator | changed: [testbed-manager]
2025-06-03 15:17:17.096675 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:17:17.098102 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:17:17.098337 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:17:17.102437 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:17:17.103009 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:17:17.105648 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:17:17.105688 | orchestrator |
2025-06-03 15:17:17.107018 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-06-03 15:17:17.107054 | orchestrator | Tuesday 03 June 2025 15:17:17 +0000 (0:00:00.744) 0:05:04.386 **********
2025-06-03 15:17:18.747862 | orchestrator | ok: [testbed-manager]
2025-06-03 15:17:18.748419 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:17:18.749139 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:17:18.750707 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:17:18.751339 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:17:18.752530 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:17:18.752949 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:17:18.754676 | orchestrator | 2025-06-03 15:17:18.756008 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-06-03 15:17:18.757476 | orchestrator | Tuesday 03 June 2025 15:17:18 +0000 (0:00:01.650) 0:05:06.037 ********** 2025-06-03 15:17:19.519513 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:17:19.520464 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:17:19.523806 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:17:19.527506 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:17:19.528136 | orchestrator | changed: [testbed-manager] 2025-06-03 15:17:19.528938 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:17:19.529760 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:17:19.530698 | orchestrator | 2025-06-03 15:17:19.531123 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-06-03 15:17:19.531907 | orchestrator | Tuesday 03 June 2025 15:17:19 +0000 (0:00:00.773) 0:05:06.810 ********** 2025-06-03 15:17:19.599076 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:17:19.665988 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:17:19.708783 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:17:19.744314 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:17:19.812216 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:17:19.812367 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:17:19.812453 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:17:19.813465 | orchestrator | 2025-06-03 15:17:19.813534 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 
2025-06-03 15:17:19.813603 | orchestrator | Tuesday 03 June 2025 15:17:19 +0000 (0:00:00.293) 0:05:07.104 ********** 2025-06-03 15:17:19.895026 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:17:19.927680 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:17:19.959465 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:17:19.990844 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:17:20.022680 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:17:20.193119 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:17:20.193373 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:17:20.194103 | orchestrator | 2025-06-03 15:17:20.195407 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-06-03 15:17:20.195511 | orchestrator | Tuesday 03 June 2025 15:17:20 +0000 (0:00:00.381) 0:05:07.486 ********** 2025-06-03 15:17:20.304750 | orchestrator | ok: [testbed-manager] 2025-06-03 15:17:20.343783 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:17:20.380459 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:17:20.416755 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:17:20.494435 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:17:20.494955 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:17:20.495702 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:17:20.496519 | orchestrator | 2025-06-03 15:17:20.496691 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-06-03 15:17:20.497723 | orchestrator | Tuesday 03 June 2025 15:17:20 +0000 (0:00:00.301) 0:05:07.787 ********** 2025-06-03 15:17:20.582118 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:17:20.619733 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:17:20.657461 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:17:20.729240 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:17:20.802349 | orchestrator | skipping: [testbed-node-3] 
2025-06-03 15:17:20.802529 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:17:20.803042 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:17:20.803490 | orchestrator | 2025-06-03 15:17:20.804215 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-06-03 15:17:20.805505 | orchestrator | Tuesday 03 June 2025 15:17:20 +0000 (0:00:00.307) 0:05:08.095 ********** 2025-06-03 15:17:20.912961 | orchestrator | ok: [testbed-manager] 2025-06-03 15:17:20.952641 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:17:20.992907 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:17:21.045436 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:17:21.125368 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:17:21.125760 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:17:21.128450 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:17:21.128950 | orchestrator | 2025-06-03 15:17:21.131432 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-06-03 15:17:21.132186 | orchestrator | Tuesday 03 June 2025 15:17:21 +0000 (0:00:00.323) 0:05:08.419 ********** 2025-06-03 15:17:21.249972 | orchestrator | ok: [testbed-manager] =>  2025-06-03 15:17:21.250114 | orchestrator |  docker_version: 5:27.5.1 2025-06-03 15:17:21.281764 | orchestrator | ok: [testbed-node-0] =>  2025-06-03 15:17:21.282208 | orchestrator |  docker_version: 5:27.5.1 2025-06-03 15:17:21.317803 | orchestrator | ok: [testbed-node-1] =>  2025-06-03 15:17:21.319217 | orchestrator |  docker_version: 5:27.5.1 2025-06-03 15:17:21.349536 | orchestrator | ok: [testbed-node-2] =>  2025-06-03 15:17:21.350257 | orchestrator |  docker_version: 5:27.5.1 2025-06-03 15:17:21.451725 | orchestrator | ok: [testbed-node-3] =>  2025-06-03 15:17:21.451827 | orchestrator |  docker_version: 5:27.5.1 2025-06-03 15:17:21.452657 | orchestrator | ok: [testbed-node-4] =>  2025-06-03 15:17:21.453226 | orchestrator |  docker_version: 
5:27.5.1 2025-06-03 15:17:21.454170 | orchestrator | ok: [testbed-node-5] =>  2025-06-03 15:17:21.456389 | orchestrator |  docker_version: 5:27.5.1 2025-06-03 15:17:21.456416 | orchestrator | 2025-06-03 15:17:21.456431 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-06-03 15:17:21.460602 | orchestrator | Tuesday 03 June 2025 15:17:21 +0000 (0:00:00.319) 0:05:08.738 ********** 2025-06-03 15:17:21.579915 | orchestrator | ok: [testbed-manager] =>  2025-06-03 15:17:21.580103 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-03 15:17:21.631121 | orchestrator | ok: [testbed-node-0] =>  2025-06-03 15:17:21.631418 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-03 15:17:21.795338 | orchestrator | ok: [testbed-node-1] =>  2025-06-03 15:17:21.795460 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-03 15:17:21.832163 | orchestrator | ok: [testbed-node-2] =>  2025-06-03 15:17:21.832340 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-03 15:17:21.908582 | orchestrator | ok: [testbed-node-3] =>  2025-06-03 15:17:21.911048 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-03 15:17:21.911087 | orchestrator | ok: [testbed-node-4] =>  2025-06-03 15:17:21.911099 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-03 15:17:21.911110 | orchestrator | ok: [testbed-node-5] =>  2025-06-03 15:17:21.911726 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-03 15:17:21.912200 | orchestrator | 2025-06-03 15:17:21.913318 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-06-03 15:17:21.916120 | orchestrator | Tuesday 03 June 2025 15:17:21 +0000 (0:00:00.463) 0:05:09.201 ********** 2025-06-03 15:17:22.032863 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:17:22.065839 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:17:22.100427 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:17:22.149181 | orchestrator | skipping: 
[testbed-node-2] 2025-06-03 15:17:22.209366 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:17:22.210581 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:17:22.213148 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:17:22.213207 | orchestrator | 2025-06-03 15:17:22.213810 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-06-03 15:17:22.214568 | orchestrator | Tuesday 03 June 2025 15:17:22 +0000 (0:00:00.301) 0:05:09.503 ********** 2025-06-03 15:17:22.278656 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:17:22.338953 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:17:22.379610 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:17:22.409705 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:17:22.496960 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:17:22.497118 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:17:22.498167 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:17:22.499369 | orchestrator | 2025-06-03 15:17:22.499560 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-06-03 15:17:22.502183 | orchestrator | Tuesday 03 June 2025 15:17:22 +0000 (0:00:00.287) 0:05:09.790 ********** 2025-06-03 15:17:22.887772 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:17:22.888132 | orchestrator | 2025-06-03 15:17:22.888845 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-06-03 15:17:22.889444 | orchestrator | Tuesday 03 June 2025 15:17:22 +0000 (0:00:00.388) 0:05:10.178 ********** 2025-06-03 15:17:23.757117 | orchestrator | ok: [testbed-manager] 2025-06-03 15:17:23.757220 | orchestrator | ok: 
[testbed-node-3] 2025-06-03 15:17:23.757633 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:17:23.759522 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:17:23.760437 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:17:23.760890 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:17:23.762240 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:17:23.762718 | orchestrator | 2025-06-03 15:17:23.763311 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-06-03 15:17:23.763962 | orchestrator | Tuesday 03 June 2025 15:17:23 +0000 (0:00:00.867) 0:05:11.046 ********** 2025-06-03 15:17:26.595734 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:17:26.595873 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:17:26.595991 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:17:26.596036 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:17:26.596148 | orchestrator | ok: [testbed-manager] 2025-06-03 15:17:26.596647 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:17:26.597255 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:17:26.597536 | orchestrator | 2025-06-03 15:17:26.597869 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-06-03 15:17:26.598384 | orchestrator | Tuesday 03 June 2025 15:17:26 +0000 (0:00:02.840) 0:05:13.887 ********** 2025-06-03 15:17:26.675682 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-06-03 15:17:26.676414 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-06-03 15:17:26.682149 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-06-03 15:17:26.746610 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:17:26.747687 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-06-03 15:17:26.748495 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-06-03 15:17:26.750969 | orchestrator | skipping: 
[testbed-node-0] => (item=docker-engine)  2025-06-03 15:17:26.833896 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-06-03 15:17:26.834172 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-06-03 15:17:26.836304 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-06-03 15:17:26.909396 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:17:26.910178 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-06-03 15:17:26.911168 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-06-03 15:17:26.912556 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-06-03 15:17:27.143045 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:17:27.143816 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-06-03 15:17:27.145407 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-06-03 15:17:27.146438 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-06-03 15:17:27.216044 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:17:27.216592 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-06-03 15:17:27.217687 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-06-03 15:17:27.218682 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-06-03 15:17:27.382416 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:17:27.382728 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:17:27.382968 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-06-03 15:17:27.383608 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-06-03 15:17:27.384087 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-06-03 15:17:27.384609 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:17:27.385123 | orchestrator | 2025-06-03 15:17:27.385384 | orchestrator | TASK [osism.services.docker : Install 
apt-transport-https package] ************* 2025-06-03 15:17:27.385829 | orchestrator | Tuesday 03 June 2025 15:17:27 +0000 (0:00:00.784) 0:05:14.671 ********** 2025-06-03 15:17:33.248229 | orchestrator | ok: [testbed-manager] 2025-06-03 15:17:33.251428 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:17:33.251563 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:17:33.251586 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:17:33.251765 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:17:33.252879 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:17:33.253571 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:17:33.254205 | orchestrator | 2025-06-03 15:17:33.254993 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-06-03 15:17:33.255949 | orchestrator | Tuesday 03 June 2025 15:17:33 +0000 (0:00:05.869) 0:05:20.541 ********** 2025-06-03 15:17:34.257105 | orchestrator | ok: [testbed-manager] 2025-06-03 15:17:34.258373 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:17:34.259035 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:17:34.260095 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:17:34.260628 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:17:34.261543 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:17:34.262310 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:17:34.263623 | orchestrator | 2025-06-03 15:17:34.266149 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-06-03 15:17:34.266431 | orchestrator | Tuesday 03 June 2025 15:17:34 +0000 (0:00:01.007) 0:05:21.549 ********** 2025-06-03 15:17:41.345398 | orchestrator | ok: [testbed-manager] 2025-06-03 15:17:41.345521 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:17:41.346124 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:17:41.347420 | orchestrator | changed: [testbed-node-5] 2025-06-03 
15:17:41.348728 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:17:41.350159 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:17:41.351005 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:17:41.352669 | orchestrator | 2025-06-03 15:17:41.353539 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-06-03 15:17:41.354235 | orchestrator | Tuesday 03 June 2025 15:17:41 +0000 (0:00:07.085) 0:05:28.635 ********** 2025-06-03 15:17:44.491032 | orchestrator | changed: [testbed-manager] 2025-06-03 15:17:44.491331 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:17:44.492939 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:17:44.493769 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:17:44.495864 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:17:44.497011 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:17:44.498502 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:17:44.499729 | orchestrator | 2025-06-03 15:17:44.500711 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-06-03 15:17:44.501937 | orchestrator | Tuesday 03 June 2025 15:17:44 +0000 (0:00:03.147) 0:05:31.782 ********** 2025-06-03 15:17:46.089981 | orchestrator | ok: [testbed-manager] 2025-06-03 15:17:46.090443 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:17:46.092821 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:17:46.093120 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:17:46.094433 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:17:46.095405 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:17:46.096211 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:17:46.096392 | orchestrator | 2025-06-03 15:17:46.096991 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-06-03 15:17:46.097284 | orchestrator | Tuesday 03 June 2025 15:17:46 +0000 
(0:00:01.596) 0:05:33.379 ********** 2025-06-03 15:17:47.579986 | orchestrator | ok: [testbed-manager] 2025-06-03 15:17:47.580066 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:17:47.580767 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:17:47.581430 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:17:47.583419 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:17:47.583472 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:17:47.583571 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:17:47.584037 | orchestrator | 2025-06-03 15:17:47.584569 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-06-03 15:17:47.585017 | orchestrator | Tuesday 03 June 2025 15:17:47 +0000 (0:00:01.487) 0:05:34.867 ********** 2025-06-03 15:17:47.793990 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:17:47.867655 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:17:47.931028 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:17:47.996463 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:17:48.230952 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:17:48.231104 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:17:48.231888 | orchestrator | changed: [testbed-manager] 2025-06-03 15:17:48.232665 | orchestrator | 2025-06-03 15:17:48.233596 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-06-03 15:17:48.234215 | orchestrator | Tuesday 03 June 2025 15:17:48 +0000 (0:00:00.656) 0:05:35.524 ********** 2025-06-03 15:17:57.695071 | orchestrator | ok: [testbed-manager] 2025-06-03 15:17:57.696413 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:17:57.698695 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:17:57.699088 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:17:57.701171 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:17:57.702157 | orchestrator | changed: 
[testbed-node-4] 2025-06-03 15:17:57.702926 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:17:57.703647 | orchestrator | 2025-06-03 15:17:57.704050 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-06-03 15:17:57.704875 | orchestrator | Tuesday 03 June 2025 15:17:57 +0000 (0:00:09.461) 0:05:44.986 ********** 2025-06-03 15:17:58.600666 | orchestrator | changed: [testbed-manager] 2025-06-03 15:17:58.601679 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:17:58.603322 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:17:58.603614 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:17:58.604348 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:17:58.604857 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:17:58.605773 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:17:58.606181 | orchestrator | 2025-06-03 15:17:58.606734 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-06-03 15:17:58.607386 | orchestrator | Tuesday 03 June 2025 15:17:58 +0000 (0:00:00.906) 0:05:45.892 ********** 2025-06-03 15:18:07.356785 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:07.357022 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:18:07.357966 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:18:07.359029 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:18:07.359692 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:18:07.360317 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:18:07.360618 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:18:07.361153 | orchestrator | 2025-06-03 15:18:07.361586 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-06-03 15:18:07.362266 | orchestrator | Tuesday 03 June 2025 15:18:07 +0000 (0:00:08.758) 0:05:54.650 ********** 2025-06-03 15:18:17.833891 | orchestrator | ok: [testbed-manager] 2025-06-03 
15:18:17.834007 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:18:17.834359 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:18:17.834384 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:18:17.834714 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:18:17.835429 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:18:17.835466 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:18:17.837706 | orchestrator | 2025-06-03 15:18:17.837808 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-06-03 15:18:17.838555 | orchestrator | Tuesday 03 June 2025 15:18:17 +0000 (0:00:10.473) 0:06:05.124 ********** 2025-06-03 15:18:18.172340 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-06-03 15:18:19.006801 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-06-03 15:18:19.007506 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-06-03 15:18:19.008362 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-06-03 15:18:19.008761 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-06-03 15:18:19.009617 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-06-03 15:18:19.010402 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-06-03 15:18:19.010444 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-06-03 15:18:19.012206 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-06-03 15:18:19.012686 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-06-03 15:18:19.013664 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-06-03 15:18:19.014125 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-06-03 15:18:19.015454 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-06-03 15:18:19.015531 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-06-03 15:18:19.016367 | orchestrator | 
2025-06-03 15:18:19.016487 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-06-03 15:18:19.017203 | orchestrator | Tuesday 03 June 2025 15:18:18 +0000 (0:00:01.172) 0:06:06.297 ********** 2025-06-03 15:18:19.149667 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:18:19.217621 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:18:19.285300 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:18:19.350550 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:18:19.417201 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:18:19.538278 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:18:19.538491 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:18:19.539369 | orchestrator | 2025-06-03 15:18:19.540772 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-06-03 15:18:19.541505 | orchestrator | Tuesday 03 June 2025 15:18:19 +0000 (0:00:00.534) 0:06:06.831 ********** 2025-06-03 15:18:23.741180 | orchestrator | ok: [testbed-manager] 2025-06-03 15:18:23.741449 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:18:23.742399 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:18:23.743335 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:18:23.744802 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:18:23.746458 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:18:23.746972 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:18:23.747742 | orchestrator | 2025-06-03 15:18:23.748378 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-06-03 15:18:23.749055 | orchestrator | Tuesday 03 June 2025 15:18:23 +0000 (0:00:04.200) 0:06:11.032 ********** 2025-06-03 15:18:23.877153 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:18:23.943679 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:18:24.006287 | 
orchestrator | skipping: [testbed-node-1] 2025-06-03 15:18:24.074884 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:18:24.159264 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:18:24.264213 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:18:24.264370 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:18:24.264688 | orchestrator | 2025-06-03 15:18:24.265962 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-06-03 15:18:24.266867 | orchestrator | Tuesday 03 June 2025 15:18:24 +0000 (0:00:00.522) 0:06:11.555 ********** 2025-06-03 15:18:24.342607 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-06-03 15:18:24.342706 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-06-03 15:18:24.430791 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:18:24.431217 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-06-03 15:18:24.431908 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-06-03 15:18:24.505903 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:18:24.506417 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-06-03 15:18:24.507592 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-06-03 15:18:24.586835 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:18:24.587035 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-06-03 15:18:24.587058 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-06-03 15:18:24.658889 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:18:24.658993 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-06-03 15:18:24.659094 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-06-03 15:18:24.726748 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:18:24.726933 | 
orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-06-03 15:18:24.727629 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-06-03 15:18:24.831095 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:18:24.831372 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-06-03 15:18:24.831637 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-06-03 15:18:24.833345 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:18:24.833575 | orchestrator |
2025-06-03 15:18:24.834432 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-06-03 15:18:24.834853 | orchestrator | Tuesday 03 June 2025 15:18:24 +0000 (0:00:00.568) 0:06:12.123 **********
2025-06-03 15:18:24.971947 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:18:25.059037 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:18:25.128608 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:18:25.206783 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:18:25.275819 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:18:25.374991 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:18:25.375092 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:18:25.375549 | orchestrator |
2025-06-03 15:18:25.376753 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-06-03 15:18:25.377580 | orchestrator | Tuesday 03 June 2025 15:18:25 +0000 (0:00:00.542) 0:06:12.666 **********
2025-06-03 15:18:25.508706 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:18:25.571923 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:18:25.636119 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:18:25.707364 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:18:25.769184 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:18:25.870598 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:18:25.871604 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:18:25.871824 | orchestrator |
2025-06-03 15:18:25.872956 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-06-03 15:18:25.873480 | orchestrator | Tuesday 03 June 2025 15:18:25 +0000 (0:00:00.495) 0:06:13.161 **********
2025-06-03 15:18:26.001682 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:18:26.065530 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:18:26.135558 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:18:26.381536 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:18:26.449865 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:18:26.571363 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:18:26.571467 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:18:26.572201 | orchestrator |
2025-06-03 15:18:26.573149 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-06-03 15:18:26.573892 | orchestrator | Tuesday 03 June 2025 15:18:26 +0000 (0:00:00.702) 0:06:13.863 **********
2025-06-03 15:18:28.345722 | orchestrator | ok: [testbed-manager]
2025-06-03 15:18:28.345853 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:18:28.346430 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:18:28.347589 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:18:28.347944 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:18:28.350344 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:18:28.351105 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:18:28.351965 | orchestrator |
2025-06-03 15:18:28.352662 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-06-03 15:18:28.353370 | orchestrator | Tuesday 03 June 2025 15:18:28 +0000 (0:00:01.773) 0:06:15.637 **********
2025-06-03 15:18:29.166576 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:18:29.167481 | orchestrator |
2025-06-03 15:18:29.167768 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-06-03 15:18:29.168905 | orchestrator | Tuesday 03 June 2025 15:18:29 +0000 (0:00:00.820) 0:06:16.457 **********
2025-06-03 15:18:30.055430 | orchestrator | ok: [testbed-manager]
2025-06-03 15:18:30.056357 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:18:30.057053 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:18:30.058353 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:18:30.059287 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:18:30.061032 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:18:30.061534 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:18:30.062512 | orchestrator |
2025-06-03 15:18:30.063716 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-06-03 15:18:30.064410 | orchestrator | Tuesday 03 June 2025 15:18:30 +0000 (0:00:00.889) 0:06:17.346 **********
2025-06-03 15:18:30.527367 | orchestrator | ok: [testbed-manager]
2025-06-03 15:18:30.600684 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:18:31.126670 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:18:31.126907 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:18:31.127036 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:18:31.127857 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:18:31.128592 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:18:31.129817 | orchestrator |
2025-06-03 15:18:31.130862 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-06-03 15:18:31.131544 | orchestrator | Tuesday 03 June 2025 15:18:31 +0000 (0:00:01.070) 0:06:18.417 **********
2025-06-03 15:18:32.566981 | orchestrator | ok: [testbed-manager]
2025-06-03 15:18:32.569114 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:18:32.569171 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:18:32.569343 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:18:32.570984 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:18:32.571509 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:18:32.572076 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:18:32.572861 | orchestrator |
2025-06-03 15:18:32.573147 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-06-03 15:18:32.573666 | orchestrator | Tuesday 03 June 2025 15:18:32 +0000 (0:00:01.442) 0:06:19.859 **********
2025-06-03 15:18:32.701209 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:18:34.113002 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:18:34.113451 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:18:34.113646 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:18:34.113952 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:18:34.115495 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:18:34.115992 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:18:34.116368 | orchestrator |
2025-06-03 15:18:34.116752 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-06-03 15:18:34.117113 | orchestrator | Tuesday 03 June 2025 15:18:34 +0000 (0:00:01.543) 0:06:21.403 **********
2025-06-03 15:18:35.653580 | orchestrator | ok: [testbed-manager]
2025-06-03 15:18:35.653757 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:18:35.654258 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:18:35.656006 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:18:35.662357 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:18:35.664187 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:18:35.666383 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:18:35.667120 | orchestrator |
2025-06-03 15:18:35.667932 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-06-03 15:18:35.668443 | orchestrator | Tuesday 03 June 2025 15:18:35 +0000 (0:00:01.540) 0:06:22.944 **********
2025-06-03 15:18:37.110703 | orchestrator | changed: [testbed-manager]
2025-06-03 15:18:37.110944 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:18:37.111997 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:18:37.112835 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:18:37.113399 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:18:37.113927 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:18:37.114417 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:18:37.115601 | orchestrator |
2025-06-03 15:18:37.115766 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-06-03 15:18:37.116096 | orchestrator | Tuesday 03 June 2025 15:18:37 +0000 (0:00:01.458) 0:06:24.403 **********
2025-06-03 15:18:38.150493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:18:38.150692 | orchestrator |
2025-06-03 15:18:38.150782 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-06-03 15:18:38.151200 | orchestrator | Tuesday 03 June 2025 15:18:38 +0000 (0:00:01.039) 0:06:25.442 **********
2025-06-03 15:18:39.622782 | orchestrator | ok: [testbed-manager]
2025-06-03 15:18:39.623011 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:18:39.625092 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:18:39.625127 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:18:39.625572 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:18:39.626309 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:18:39.627348 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:18:39.628314 | orchestrator |
2025-06-03 15:18:39.628897 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-06-03 15:18:39.629608 | orchestrator | Tuesday 03 June 2025 15:18:39 +0000 (0:00:01.472) 0:06:26.915 **********
2025-06-03 15:18:40.732442 | orchestrator | ok: [testbed-manager]
2025-06-03 15:18:40.732811 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:18:40.735594 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:18:40.735632 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:18:40.736564 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:18:40.737376 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:18:40.738108 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:18:40.739023 | orchestrator |
2025-06-03 15:18:40.739712 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-06-03 15:18:40.740732 | orchestrator | Tuesday 03 June 2025 15:18:40 +0000 (0:00:01.107) 0:06:28.022 **********
2025-06-03 15:18:42.241184 | orchestrator | ok: [testbed-manager]
2025-06-03 15:18:42.243766 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:18:42.245066 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:18:42.245948 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:18:42.247060 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:18:42.247594 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:18:42.248433 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:18:42.250367 | orchestrator |
2025-06-03 15:18:42.250462 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-06-03 15:18:42.250774 | orchestrator | Tuesday 03 June 2025 15:18:42 +0000 (0:00:01.507) 0:06:29.529 **********
2025-06-03 15:18:43.377891 | orchestrator | ok: [testbed-manager]
2025-06-03 15:18:43.378221 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:18:43.378833 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:18:43.379781 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:18:43.379955 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:18:43.380405 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:18:43.380852 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:18:43.381393 | orchestrator |
2025-06-03 15:18:43.381861 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-06-03 15:18:43.382380 | orchestrator | Tuesday 03 June 2025 15:18:43 +0000 (0:00:01.138) 0:06:30.668 **********
2025-06-03 15:18:44.555846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:18:44.556267 | orchestrator |
2025-06-03 15:18:44.557113 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-03 15:18:44.560333 | orchestrator | Tuesday 03 June 2025 15:18:44 +0000 (0:00:00.871) 0:06:31.539 **********
2025-06-03 15:18:44.563373 | orchestrator |
2025-06-03 15:18:44.564879 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-03 15:18:44.565127 | orchestrator | Tuesday 03 June 2025 15:18:44 +0000 (0:00:00.037) 0:06:31.577 **********
2025-06-03 15:18:44.565513 | orchestrator |
2025-06-03 15:18:44.566002 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-03 15:18:44.566458 | orchestrator | Tuesday 03 June 2025 15:18:44 +0000 (0:00:00.043) 0:06:31.621 **********
2025-06-03 15:18:44.566853 | orchestrator |
2025-06-03 15:18:44.567483 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-03 15:18:44.568135 | orchestrator | Tuesday 03 June 2025 15:18:44 +0000 (0:00:00.046) 0:06:31.668 **********
2025-06-03 15:18:44.568618 | orchestrator |
2025-06-03 15:18:44.569150 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-03 15:18:44.569574 | orchestrator | Tuesday 03 June 2025 15:18:44 +0000 (0:00:00.049) 0:06:31.717 **********
2025-06-03 15:18:44.570176 | orchestrator |
2025-06-03 15:18:44.571281 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-03 15:18:44.571830 | orchestrator | Tuesday 03 June 2025 15:18:44 +0000 (0:00:00.049) 0:06:31.767 **********
2025-06-03 15:18:44.572339 | orchestrator |
2025-06-03 15:18:44.572620 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-03 15:18:44.573156 | orchestrator | Tuesday 03 June 2025 15:18:44 +0000 (0:00:00.038) 0:06:31.806 **********
2025-06-03 15:18:44.573890 | orchestrator |
2025-06-03 15:18:44.574473 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-03 15:18:44.574836 | orchestrator | Tuesday 03 June 2025 15:18:44 +0000 (0:00:00.040) 0:06:31.847 **********
2025-06-03 15:18:45.957741 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:18:45.958960 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:18:45.959655 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:18:45.960459 | orchestrator |
2025-06-03 15:18:45.961609 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-06-03 15:18:45.962561 | orchestrator | Tuesday 03 June 2025 15:18:45 +0000 (0:00:01.401) 0:06:33.248 **********
2025-06-03 15:18:47.330817 | orchestrator | changed: [testbed-manager]
2025-06-03 15:18:47.331477 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:18:47.332401 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:18:47.333474 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:18:47.335046 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:18:47.335634 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:18:47.336248 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:18:47.337364 | orchestrator |
2025-06-03 15:18:47.337635 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-06-03 15:18:47.338144 | orchestrator | Tuesday 03 June 2025 15:18:47 +0000 (0:00:01.373) 0:06:34.621 **********
2025-06-03 15:18:48.442277 | orchestrator | changed: [testbed-manager]
2025-06-03 15:18:48.442505 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:18:48.443332 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:18:48.443773 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:18:48.444584 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:18:48.445605 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:18:48.445931 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:18:48.447553 | orchestrator |
2025-06-03 15:18:48.448311 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-06-03 15:18:48.448994 | orchestrator | Tuesday 03 June 2025 15:18:48 +0000 (0:00:01.110) 0:06:35.732 **********
2025-06-03 15:18:48.582747 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:18:50.755836 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:18:50.757084 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:18:50.758374 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:18:50.760113 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:18:50.760725 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:18:50.761266 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:18:50.761898 | orchestrator |
2025-06-03 15:18:50.762378 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-06-03 15:18:50.762994 | orchestrator | Tuesday 03 June 2025 15:18:50 +0000 (0:00:02.312) 0:06:38.045 **********
2025-06-03 15:18:50.862872 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:18:50.863047 | orchestrator |
2025-06-03 15:18:50.863652 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-06-03 15:18:50.864381 | orchestrator | Tuesday 03 June 2025 15:18:50 +0000 (0:00:00.110) 0:06:38.156 **********
2025-06-03 15:18:51.895001 | orchestrator | ok: [testbed-manager]
2025-06-03 15:18:51.895599 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:18:51.896930 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:18:51.897436 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:18:51.898332 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:18:51.899080 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:18:51.900442 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:18:51.901121 | orchestrator |
2025-06-03 15:18:51.901897 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-06-03 15:18:51.902551 | orchestrator | Tuesday 03 June 2025 15:18:51 +0000 (0:00:01.027) 0:06:39.184 **********
2025-06-03 15:18:52.254556 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:18:52.327662 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:18:52.391325 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:18:52.487624 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:18:52.563333 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:18:52.690824 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:18:52.690918 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:18:52.691100 | orchestrator |
2025-06-03 15:18:52.691345 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-06-03 15:18:52.691722 | orchestrator | Tuesday 03 June 2025 15:18:52 +0000 (0:00:00.799) 0:06:39.983 **********
2025-06-03 15:18:53.668984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:18:53.671262 | orchestrator |
2025-06-03 15:18:53.671301 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-06-03 15:18:53.673281 | orchestrator | Tuesday 03 June 2025 15:18:53 +0000 (0:00:00.977) 0:06:40.961 **********
2025-06-03 15:18:54.104247 | orchestrator | ok: [testbed-manager]
2025-06-03 15:18:54.553651 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:18:54.554427 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:18:54.555691 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:18:54.556377 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:18:54.556732 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:18:54.557759 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:18:54.560588 | orchestrator |
2025-06-03 15:18:54.564123 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-06-03 15:18:54.564263 | orchestrator | Tuesday 03 June 2025 15:18:54 +0000 (0:00:00.886) 0:06:41.847 **********
2025-06-03 15:18:57.316845 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-06-03 15:18:57.317490 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-06-03 15:18:57.317918 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-06-03 15:18:57.318483 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-06-03 15:18:57.319196 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-06-03 15:18:57.320711 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-06-03 15:18:57.321496 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-06-03 15:18:57.322075 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-06-03 15:18:57.322533 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-06-03 15:18:57.323883 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-06-03 15:18:57.323918 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-06-03 15:18:57.323931 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-06-03 15:18:57.324500 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-06-03 15:18:57.324971 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-06-03 15:18:57.325492 | orchestrator |
2025-06-03 15:18:57.325882 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-06-03 15:18:57.326624 | orchestrator | Tuesday 03 June 2025 15:18:57 +0000 (0:00:02.760) 0:06:44.608 **********
2025-06-03 15:18:57.464729 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:18:57.532322 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:18:57.605601 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:18:57.671092 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:18:57.734878 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:18:57.837930 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:18:57.838407 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:18:57.839621 | orchestrator |
2025-06-03 15:18:57.840584 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-06-03 15:18:57.840929 | orchestrator | Tuesday 03 June 2025 15:18:57 +0000 (0:00:00.523) 0:06:45.132 **********
2025-06-03 15:18:58.681182 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:18:58.681852 | orchestrator |
2025-06-03 15:18:58.682918 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-06-03 15:18:58.683984 | orchestrator | Tuesday 03 June 2025 15:18:58 +0000 (0:00:00.839) 0:06:45.971 **********
2025-06-03 15:18:59.273830 | orchestrator | ok: [testbed-manager]
2025-06-03 15:18:59.343932 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:18:59.825719 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:18:59.825854 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:18:59.827678 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:18:59.830599 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:18:59.831328 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:18:59.831649 | orchestrator |
2025-06-03 15:18:59.832430 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-06-03 15:18:59.833773 | orchestrator | Tuesday 03 June 2025 15:18:59 +0000 (0:00:01.144) 0:06:47.116 **********
2025-06-03 15:19:00.233724 | orchestrator | ok: [testbed-manager]
2025-06-03 15:19:00.664878 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:19:00.664983 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:19:00.664998 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:19:00.665071 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:19:00.665325 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:19:00.668264 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:19:00.668867 | orchestrator |
2025-06-03 15:19:00.669683 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-06-03 15:19:00.670415 | orchestrator | Tuesday 03 June 2025 15:19:00 +0000 (0:00:00.832) 0:06:47.949 **********
2025-06-03 15:19:00.804686 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:19:00.873611 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:19:00.938592 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:19:01.010768 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:19:01.078273 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:19:01.180561 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:19:01.180695 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:19:01.180945 | orchestrator |
2025-06-03 15:19:01.181264 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-06-03 15:19:01.181648 | orchestrator | Tuesday 03 June 2025 15:19:01 +0000 (0:00:00.523) 0:06:48.472 **********
2025-06-03 15:19:02.590754 | orchestrator | ok: [testbed-manager]
2025-06-03 15:19:02.591447 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:19:02.592845 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:19:02.596000 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:19:02.596143 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:19:02.596803 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:19:02.600042 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:19:02.600377 | orchestrator |
2025-06-03 15:19:02.601943 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-06-03 15:19:02.602755 | orchestrator | Tuesday 03 June 2025 15:19:02 +0000 (0:00:01.408) 0:06:49.881 **********
2025-06-03 15:19:02.718422 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:19:02.792621 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:19:02.856055 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:19:02.922165 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:19:02.999382 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:19:03.100931 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:19:03.101786 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:19:03.103047 | orchestrator |
2025-06-03 15:19:03.104772 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-06-03 15:19:03.105207 | orchestrator | Tuesday 03 June 2025 15:19:03 +0000 (0:00:00.510) 0:06:50.391 **********
2025-06-03 15:19:10.876568 | orchestrator | ok: [testbed-manager]
2025-06-03 15:19:10.876685 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:19:10.876701 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:19:10.877308 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:19:10.878662 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:19:10.879598 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:19:10.880569 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:19:10.880703 | orchestrator |
2025-06-03 15:19:10.881139 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-06-03 15:19:10.882540 | orchestrator | Tuesday 03 June 2025 15:19:10 +0000 (0:00:07.772) 0:06:58.164 **********
2025-06-03 15:19:12.332951 | orchestrator | ok: [testbed-manager]
2025-06-03 15:19:12.334364 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:19:12.335938 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:19:12.336886 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:19:12.338652 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:19:12.339452 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:19:12.340497 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:19:12.341494 | orchestrator |
2025-06-03 15:19:12.342323 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-06-03 15:19:12.343552 | orchestrator | Tuesday 03 June 2025 15:19:12 +0000 (0:00:01.460) 0:06:59.625 **********
2025-06-03 15:19:14.884327 | orchestrator | ok: [testbed-manager]
2025-06-03 15:19:14.884510 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:19:14.885489 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:19:14.887023 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:19:14.887564 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:19:14.888094 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:19:14.888665 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:19:14.889060 | orchestrator |
2025-06-03 15:19:14.889653 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-06-03 15:19:14.889995 | orchestrator | Tuesday 03 June 2025 15:19:14 +0000 (0:00:02.548) 0:07:02.173 **********
2025-06-03 15:19:16.585617 | orchestrator | ok: [testbed-manager]
2025-06-03 15:19:16.585726 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:19:16.585834 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:19:16.586168 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:19:16.587463 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:19:16.587804 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:19:16.588614 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:19:16.589102 | orchestrator |
2025-06-03 15:19:16.589734 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-03 15:19:16.590280 | orchestrator | Tuesday 03 June 2025 15:19:16 +0000 (0:00:01.703) 0:07:03.876 **********
2025-06-03 15:19:16.999163 | orchestrator | ok: [testbed-manager]
2025-06-03 15:19:17.622776 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:19:17.623912 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:19:17.624698 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:19:17.625662 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:19:17.626471 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:19:17.627401 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:19:17.628010 | orchestrator |
2025-06-03 15:19:17.628410 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-03 15:19:17.628748 | orchestrator | Tuesday 03 June 2025 15:19:17 +0000 (0:00:01.039) 0:07:04.916 **********
2025-06-03 15:19:17.758756 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:19:17.857953 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:19:17.926816 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:19:17.992669 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:19:18.066503 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:19:18.457035 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:19:18.457987 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:19:18.459551 | orchestrator |
2025-06-03 15:19:18.460412 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-06-03 15:19:18.461579 | orchestrator | Tuesday 03 June 2025 15:19:18 +0000 (0:00:00.832) 0:07:05.748 **********
2025-06-03 15:19:18.617495 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:19:18.684634 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:19:18.756840 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:19:18.824000 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:19:18.891856 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:19:19.030079 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:19:19.030600 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:19:19.030801 | orchestrator |
2025-06-03 15:19:19.031740 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-06-03 15:19:19.031770 | orchestrator | Tuesday 03 June 2025 15:19:19 +0000 (0:00:00.573) 0:07:06.322 **********
2025-06-03 15:19:19.179323 | orchestrator | ok: [testbed-manager]
2025-06-03 15:19:19.256776 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:19:19.325783 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:19:19.406073 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:19:19.728525 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:19:19.859046 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:19:19.859143 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:19:19.859841 | orchestrator |
2025-06-03 15:19:19.860105 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-06-03 15:19:19.862327 | orchestrator | Tuesday 03 June 2025 15:19:19 +0000 (0:00:00.826) 0:07:07.148 **********
2025-06-03 15:19:20.049858 | orchestrator | ok: [testbed-manager]
2025-06-03 15:19:20.114620 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:19:20.187956 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:19:20.282869 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:19:20.347758 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:19:20.448294 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:19:20.448713 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:19:20.449487 | orchestrator |
2025-06-03 15:19:20.450587 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-06-03 15:19:20.454402 | orchestrator | Tuesday 03 June 2025 15:19:20 +0000 (0:00:00.591) 0:07:07.740 **********
2025-06-03 15:19:20.587675 | orchestrator | ok: [testbed-manager]
2025-06-03 15:19:20.654402 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:19:20.727163 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:19:20.792438 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:19:20.962437 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:19:20.962522 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:19:20.963391 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:19:20.967656 | orchestrator |
2025-06-03 15:19:20.967698 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-06-03 15:19:20.967707 | orchestrator | Tuesday 03 June 2025 15:19:20 +0000 (0:00:00.514) 0:07:08.255 **********
2025-06-03 15:19:26.576676 | orchestrator | ok: [testbed-manager]
2025-06-03 15:19:26.576809 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:19:26.578183 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:19:26.578828 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:19:26.579775 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:19:26.580755 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:19:26.581843 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:19:26.582930 | orchestrator |
2025-06-03 15:19:26.583330 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-06-03 15:19:26.584114 | orchestrator | Tuesday 03 June 2025 15:19:26 +0000 (0:00:05.612) 0:07:13.867 **********
2025-06-03 15:19:26.780836 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:19:26.844381 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:19:26.914364 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:19:26.974509 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:19:27.096914 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:19:27.097037 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:19:27.098463 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:19:27.099112 | orchestrator |
2025-06-03 15:19:27.099833 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-06-03 15:19:27.100801 | orchestrator | Tuesday 03 June 2025 15:19:27 +0000 (0:00:00.521) 0:07:14.389 **********
2025-06-03 15:19:28.070915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:19:28.071591 | orchestrator |
2025-06-03 15:19:28.072169 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-06-03 15:19:28.073487 | orchestrator | Tuesday 03 June 2025 15:19:28 +0000 (0:00:00.973) 0:07:15.363 **********
2025-06-03 15:19:29.854695 | orchestrator | ok: [testbed-manager]
2025-06-03 15:19:29.854848 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:19:29.855718 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:19:29.856317 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:19:29.856913 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:19:29.858485 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:19:29.858706 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:19:29.859341 | orchestrator |
2025-06-03 15:19:29.859826 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-06-03 15:19:29.860364 | orchestrator | Tuesday 03 June 2025 15:19:29 +0000 (0:00:01.783) 0:07:17.146 **********
2025-06-03 15:19:31.042771 | orchestrator | ok: [testbed-manager]
2025-06-03 15:19:31.042973 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:19:31.044625 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:19:31.045392 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:19:31.046661 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:19:31.047435 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:19:31.048318 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:19:31.048922 | orchestrator |
2025-06-03 15:19:31.049754 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-06-03 15:19:31.050636 | orchestrator | Tuesday 03 June 2025 15:19:31 +0000 (0:00:01.187) 0:07:18.333 **********
2025-06-03 15:19:32.157100 | orchestrator | ok: [testbed-manager]
2025-06-03 15:19:32.157554 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:19:32.158382 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:19:32.160362 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:19:32.160408 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:19:32.160526 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:19:32.161119 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:19:32.161513 | orchestrator |
2025-06-03 15:19:32.162000 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-06-03 15:19:32.162486 | orchestrator | Tuesday 03 June 2025 15:19:32 +0000 (0:00:01.113) 0:07:19.446 **********
2025-06-03 15:19:33.900666 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-03 15:19:33.900925 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-03 15:19:33.902203 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-03 15:19:33.903594 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-03 15:19:33.904187 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-03 15:19:33.905288 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-03 15:19:33.905887 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-03 15:19:33.906625 | orchestrator |
2025-06-03 15:19:33.907644 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-06-03 15:19:33.908503 | orchestrator | Tuesday 03 June 2025 15:19:33 +0000 (0:00:01.743) 0:07:21.190 **********
2025-06-03 15:19:34.815164 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:19:34.815406 | orchestrator |
2025-06-03 15:19:34.816010 | orchestrator | TASK
[osism.services.lldpd : Install lldpd package] **************************** 2025-06-03 15:19:34.817476 | orchestrator | Tuesday 03 June 2025 15:19:34 +0000 (0:00:00.915) 0:07:22.105 ********** 2025-06-03 15:19:43.944095 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:19:43.944361 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:19:43.945194 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:19:43.949625 | orchestrator | changed: [testbed-manager] 2025-06-03 15:19:43.949698 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:19:43.949713 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:19:43.951862 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:19:43.952843 | orchestrator | 2025-06-03 15:19:43.953983 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-06-03 15:19:43.955000 | orchestrator | Tuesday 03 June 2025 15:19:43 +0000 (0:00:09.126) 0:07:31.232 ********** 2025-06-03 15:19:45.756651 | orchestrator | ok: [testbed-manager] 2025-06-03 15:19:45.757235 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:19:45.758623 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:19:45.760308 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:19:45.760331 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:19:45.761232 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:19:45.761959 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:19:45.762538 | orchestrator | 2025-06-03 15:19:45.763025 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-06-03 15:19:45.763903 | orchestrator | Tuesday 03 June 2025 15:19:45 +0000 (0:00:01.814) 0:07:33.046 ********** 2025-06-03 15:19:47.123629 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:19:47.123762 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:19:47.123778 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:19:47.124898 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:19:47.125788 
| orchestrator | ok: [testbed-node-4] 2025-06-03 15:19:47.126161 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:19:47.126969 | orchestrator | 2025-06-03 15:19:47.127300 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-06-03 15:19:47.128476 | orchestrator | Tuesday 03 June 2025 15:19:47 +0000 (0:00:01.365) 0:07:34.411 ********** 2025-06-03 15:19:48.636094 | orchestrator | changed: [testbed-manager] 2025-06-03 15:19:48.636358 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:19:48.638115 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:19:48.638143 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:19:48.639126 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:19:48.640343 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:19:48.641606 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:19:48.642897 | orchestrator | 2025-06-03 15:19:48.643113 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-06-03 15:19:48.644282 | orchestrator | 2025-06-03 15:19:48.644720 | orchestrator | TASK [Include hardening role] ************************************************** 2025-06-03 15:19:48.645772 | orchestrator | Tuesday 03 June 2025 15:19:48 +0000 (0:00:01.517) 0:07:35.928 ********** 2025-06-03 15:19:48.773602 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:19:48.842381 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:19:48.928070 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:19:49.004759 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:19:49.088962 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:19:49.234730 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:19:49.235402 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:19:49.237349 | orchestrator | 2025-06-03 15:19:49.237446 | orchestrator | PLAY [Apply bootstrap roles part 3] 
******************************************** 2025-06-03 15:19:49.239616 | orchestrator | 2025-06-03 15:19:49.241308 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-06-03 15:19:49.245491 | orchestrator | Tuesday 03 June 2025 15:19:49 +0000 (0:00:00.599) 0:07:36.528 ********** 2025-06-03 15:19:50.651534 | orchestrator | changed: [testbed-manager] 2025-06-03 15:19:50.652553 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:19:50.654385 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:19:50.655004 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:19:50.655546 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:19:50.656813 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:19:50.657575 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:19:50.659072 | orchestrator | 2025-06-03 15:19:50.659851 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-06-03 15:19:50.660179 | orchestrator | Tuesday 03 June 2025 15:19:50 +0000 (0:00:01.415) 0:07:37.943 ********** 2025-06-03 15:19:52.364243 | orchestrator | ok: [testbed-manager] 2025-06-03 15:19:52.367564 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:19:52.367603 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:19:52.367615 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:19:52.373222 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:19:52.373916 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:19:52.374272 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:19:52.377723 | orchestrator | 2025-06-03 15:19:52.377775 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-06-03 15:19:52.379781 | orchestrator | Tuesday 03 June 2025 15:19:52 +0000 (0:00:01.710) 0:07:39.653 ********** 2025-06-03 15:19:52.737028 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:19:52.804781 | orchestrator | skipping: [testbed-node-0] 
2025-06-03 15:19:52.886457 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:19:52.946593 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:19:53.014583 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:19:53.449088 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:19:53.449602 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:19:53.454441 | orchestrator | 2025-06-03 15:19:53.454546 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-06-03 15:19:53.455921 | orchestrator | Tuesday 03 June 2025 15:19:53 +0000 (0:00:01.085) 0:07:40.738 ********** 2025-06-03 15:19:54.744795 | orchestrator | changed: [testbed-manager] 2025-06-03 15:19:54.745709 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:19:54.746966 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:19:54.748017 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:19:54.748280 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:19:54.749637 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:19:54.750408 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:19:54.750789 | orchestrator | 2025-06-03 15:19:54.751589 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-06-03 15:19:54.751963 | orchestrator | 2025-06-03 15:19:54.753015 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-06-03 15:19:54.753351 | orchestrator | Tuesday 03 June 2025 15:19:54 +0000 (0:00:01.296) 0:07:42.035 ********** 2025-06-03 15:19:55.808816 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:19:55.810355 | orchestrator | 2025-06-03 15:19:55.811261 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-03 15:19:55.813000 | orchestrator | Tuesday 
03 June 2025 15:19:55 +0000 (0:00:01.060) 0:07:43.096 ********** 2025-06-03 15:19:56.235004 | orchestrator | ok: [testbed-manager] 2025-06-03 15:19:56.672317 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:19:56.672861 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:19:56.674394 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:19:56.675780 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:19:56.677884 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:19:56.678406 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:19:56.679333 | orchestrator | 2025-06-03 15:19:56.679709 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-03 15:19:56.680882 | orchestrator | Tuesday 03 June 2025 15:19:56 +0000 (0:00:00.864) 0:07:43.960 ********** 2025-06-03 15:19:57.799930 | orchestrator | changed: [testbed-manager] 2025-06-03 15:19:57.801065 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:19:57.806170 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:19:57.812490 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:19:57.814529 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:19:57.815657 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:19:57.816123 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:19:57.817810 | orchestrator | 2025-06-03 15:19:57.818691 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-06-03 15:19:57.819419 | orchestrator | Tuesday 03 June 2025 15:19:57 +0000 (0:00:01.129) 0:07:45.090 ********** 2025-06-03 15:19:58.915718 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:19:58.916345 | orchestrator | 2025-06-03 15:19:58.917617 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-03 15:19:58.917678 | orchestrator | Tuesday 03 
June 2025 15:19:58 +0000 (0:00:01.115) 0:07:46.205 ********** 2025-06-03 15:19:59.773918 | orchestrator | ok: [testbed-manager] 2025-06-03 15:19:59.776612 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:19:59.777938 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:19:59.779566 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:19:59.780665 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:19:59.781404 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:19:59.781900 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:19:59.782584 | orchestrator | 2025-06-03 15:19:59.783094 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-03 15:19:59.783886 | orchestrator | Tuesday 03 June 2025 15:19:59 +0000 (0:00:00.857) 0:07:47.062 ********** 2025-06-03 15:20:00.266487 | orchestrator | changed: [testbed-manager] 2025-06-03 15:20:00.965406 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:20:00.965686 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:20:00.966667 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:20:00.968575 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:20:00.970247 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:20:00.971369 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:20:00.972668 | orchestrator | 2025-06-03 15:20:00.974984 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:20:00.975065 | orchestrator | 2025-06-03 15:20:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:20:00.975083 | orchestrator | 2025-06-03 15:20:00 | INFO  | Please wait and do not abort execution. 
2025-06-03 15:20:00.975793 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-06-03 15:20:00.976701 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-06-03 15:20:00.977651 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-03 15:20:00.979087 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-03 15:20:00.979825 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-03 15:20:00.980892 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-03 15:20:00.981788 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-03 15:20:00.983175 | orchestrator | 2025-06-03 15:20:00.983644 | orchestrator | 2025-06-03 15:20:00.984632 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:20:00.984935 | orchestrator | Tuesday 03 June 2025 15:20:00 +0000 (0:00:01.193) 0:07:48.256 ********** 2025-06-03 15:20:00.986323 | orchestrator | =============================================================================== 2025-06-03 15:20:00.986721 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.45s 2025-06-03 15:20:00.987765 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.65s 2025-06-03 15:20:00.988658 | orchestrator | osism.commons.packages : Download required packages -------------------- 35.06s 2025-06-03 15:20:00.989348 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.44s 2025-06-03 15:20:00.990241 | orchestrator | osism.commons.systohc : Install util-linux-extra 
package --------------- 12.11s 2025-06-03 15:20:00.990739 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.57s 2025-06-03 15:20:00.991512 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.47s 2025-06-03 15:20:00.991832 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.46s 2025-06-03 15:20:00.992765 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.13s 2025-06-03 15:20:00.993302 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.76s 2025-06-03 15:20:00.993821 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.57s 2025-06-03 15:20:00.995718 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.40s 2025-06-03 15:20:00.996289 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.77s 2025-06-03 15:20:00.996773 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.76s 2025-06-03 15:20:00.997455 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.22s 2025-06-03 15:20:00.998170 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.09s 2025-06-03 15:20:00.998728 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.87s 2025-06-03 15:20:00.999264 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.79s 2025-06-03 15:20:01.000058 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.61s 2025-06-03 15:20:01.000494 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.60s 2025-06-03 15:20:01.765715 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-03 15:20:01.765842 | 
orchestrator | + osism apply network 2025-06-03 15:20:04.122787 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:20:04.122866 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:20:04.122878 | orchestrator | Registering Redlock._release_script 2025-06-03 15:20:04.189764 | orchestrator | 2025-06-03 15:20:04 | INFO  | Task e6788921-94e0-4d9d-b61b-3bc9a77fa3e1 (network) was prepared for execution. 2025-06-03 15:20:04.189868 | orchestrator | 2025-06-03 15:20:04 | INFO  | It takes a moment until task e6788921-94e0-4d9d-b61b-3bc9a77fa3e1 (network) has been started and output is visible here. 2025-06-03 15:20:08.639011 | orchestrator | 2025-06-03 15:20:08.640680 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-06-03 15:20:08.641707 | orchestrator | 2025-06-03 15:20:08.642848 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-06-03 15:20:08.643883 | orchestrator | Tuesday 03 June 2025 15:20:08 +0000 (0:00:00.269) 0:00:00.269 ********** 2025-06-03 15:20:08.805801 | orchestrator | ok: [testbed-manager] 2025-06-03 15:20:08.891361 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:20:08.967367 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:20:09.045848 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:20:09.234872 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:20:09.365869 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:20:09.367475 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:20:09.368001 | orchestrator | 2025-06-03 15:20:09.368845 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-06-03 15:20:09.369910 | orchestrator | Tuesday 03 June 2025 15:20:09 +0000 (0:00:00.728) 0:00:00.997 ********** 2025-06-03 15:20:10.546959 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:20:10.547450 | orchestrator | 2025-06-03 15:20:10.548337 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-06-03 15:20:10.549051 | orchestrator | Tuesday 03 June 2025 15:20:10 +0000 (0:00:01.177) 0:00:02.174 ********** 2025-06-03 15:20:12.748020 | orchestrator | ok: [testbed-manager] 2025-06-03 15:20:12.748569 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:20:12.751437 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:20:12.751468 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:20:12.752458 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:20:12.752929 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:20:12.753715 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:20:12.754330 | orchestrator | 2025-06-03 15:20:12.755633 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-06-03 15:20:12.755827 | orchestrator | Tuesday 03 June 2025 15:20:12 +0000 (0:00:02.205) 0:00:04.380 ********** 2025-06-03 15:20:14.528848 | orchestrator | ok: [testbed-manager] 2025-06-03 15:20:14.529484 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:20:14.529554 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:20:14.529627 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:20:14.529946 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:20:14.531725 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:20:14.531821 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:20:14.532254 | orchestrator | 2025-06-03 15:20:14.532561 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-06-03 15:20:14.532849 | orchestrator | Tuesday 03 June 2025 15:20:14 +0000 (0:00:01.779) 0:00:06.160 ********** 2025-06-03 15:20:15.077833 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-06-03 15:20:15.078353 | 
orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-06-03 15:20:15.079131 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-06-03 15:20:15.541709 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-06-03 15:20:15.541917 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-06-03 15:20:15.542887 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-06-03 15:20:15.543858 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-06-03 15:20:15.544314 | orchestrator | 2025-06-03 15:20:15.545501 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-06-03 15:20:15.551418 | orchestrator | Tuesday 03 June 2025 15:20:15 +0000 (0:00:01.015) 0:00:07.175 ********** 2025-06-03 15:20:18.768996 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-03 15:20:18.769162 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-03 15:20:18.769841 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:20:18.770597 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-03 15:20:18.770936 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-03 15:20:18.771733 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-03 15:20:18.772122 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-03 15:20:18.773263 | orchestrator | 2025-06-03 15:20:18.773609 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-06-03 15:20:18.775634 | orchestrator | Tuesday 03 June 2025 15:20:18 +0000 (0:00:03.221) 0:00:10.397 ********** 2025-06-03 15:20:20.283718 | orchestrator | changed: [testbed-manager] 2025-06-03 15:20:20.287540 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:20:20.287600 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:20:20.288397 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:20:20.289704 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:20:20.290424 | 
orchestrator | changed: [testbed-node-4] 2025-06-03 15:20:20.291651 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:20:20.292433 | orchestrator | 2025-06-03 15:20:20.293163 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-06-03 15:20:20.294138 | orchestrator | Tuesday 03 June 2025 15:20:20 +0000 (0:00:01.518) 0:00:11.915 ********** 2025-06-03 15:20:22.286227 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-03 15:20:22.291852 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:20:22.293262 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-03 15:20:22.293292 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-03 15:20:22.298286 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-03 15:20:22.300995 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-03 15:20:22.303707 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-03 15:20:22.306281 | orchestrator | 2025-06-03 15:20:22.307418 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-06-03 15:20:22.308778 | orchestrator | Tuesday 03 June 2025 15:20:22 +0000 (0:00:02.002) 0:00:13.918 ********** 2025-06-03 15:20:22.784274 | orchestrator | ok: [testbed-manager] 2025-06-03 15:20:22.866804 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:20:23.434156 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:20:23.435116 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:20:23.435482 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:20:23.436727 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:20:23.437425 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:20:23.438406 | orchestrator | 2025-06-03 15:20:23.439156 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-06-03 15:20:23.440131 | orchestrator | Tuesday 03 June 2025 15:20:23 +0000 (0:00:01.145) 0:00:15.064 ********** 2025-06-03 15:20:23.605991 
| orchestrator | skipping: [testbed-manager] 2025-06-03 15:20:23.692323 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:20:23.782779 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:20:23.874960 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:20:23.958914 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:20:24.112393 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:20:24.112938 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:20:24.113768 | orchestrator | 2025-06-03 15:20:24.114588 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-06-03 15:20:24.115560 | orchestrator | Tuesday 03 June 2025 15:20:24 +0000 (0:00:00.679) 0:00:15.743 ********** 2025-06-03 15:20:26.174961 | orchestrator | ok: [testbed-manager] 2025-06-03 15:20:26.176087 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:20:26.179864 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:20:26.179893 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:20:26.179903 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:20:26.180581 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:20:26.182470 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:20:26.182700 | orchestrator | 2025-06-03 15:20:26.184393 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-06-03 15:20:26.185066 | orchestrator | Tuesday 03 June 2025 15:20:26 +0000 (0:00:02.061) 0:00:17.804 ********** 2025-06-03 15:20:26.433591 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:20:26.543497 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:20:26.626175 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:20:26.706633 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:20:27.094169 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:20:27.095289 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:20:27.095597 | orchestrator | changed: [testbed-manager] => 
(item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-06-03 15:20:27.096720 | orchestrator | 2025-06-03 15:20:27.097727 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-06-03 15:20:27.098673 | orchestrator | Tuesday 03 June 2025 15:20:27 +0000 (0:00:00.919) 0:00:18.724 ********** 2025-06-03 15:20:28.788397 | orchestrator | ok: [testbed-manager] 2025-06-03 15:20:28.788915 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:20:28.790095 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:20:28.790591 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:20:28.791126 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:20:28.791699 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:20:28.792367 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:20:28.792959 | orchestrator | 2025-06-03 15:20:28.793688 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-06-03 15:20:28.794286 | orchestrator | Tuesday 03 June 2025 15:20:28 +0000 (0:00:01.693) 0:00:20.418 ********** 2025-06-03 15:20:30.074545 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:20:30.075022 | orchestrator | 2025-06-03 15:20:30.076074 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-03 15:20:30.076959 | orchestrator | Tuesday 03 June 2025 15:20:30 +0000 (0:00:01.286) 0:00:21.705 ********** 2025-06-03 15:20:30.642459 | orchestrator | ok: [testbed-manager] 2025-06-03 15:20:31.085260 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:20:31.087759 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:20:31.090328 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:20:31.092118 | 
orchestrator | ok: [testbed-node-3] 2025-06-03 15:20:31.092437 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:20:31.094385 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:20:31.095174 | orchestrator | 2025-06-03 15:20:31.096039 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-06-03 15:20:31.096807 | orchestrator | Tuesday 03 June 2025 15:20:31 +0000 (0:00:01.012) 0:00:22.717 ********** 2025-06-03 15:20:31.463137 | orchestrator | ok: [testbed-manager] 2025-06-03 15:20:31.549656 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:20:31.640607 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:20:31.738153 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:20:31.833642 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:20:31.972032 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:20:31.972856 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:20:31.973509 | orchestrator | 2025-06-03 15:20:31.974111 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-03 15:20:31.974977 | orchestrator | Tuesday 03 June 2025 15:20:31 +0000 (0:00:00.889) 0:00:23.607 ********** 2025-06-03 15:20:32.395016 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-03 15:20:32.395148 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-06-03 15:20:32.496793 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-03 15:20:32.497104 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-06-03 15:20:33.170699 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-03 15:20:33.172696 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-06-03 15:20:33.172726 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-03 15:20:33.174320 | 
orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-06-03 15:20:33.175852 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-03 15:20:33.177607 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-06-03 15:20:33.178925 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-03 15:20:33.180068 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-06-03 15:20:33.181321 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-03 15:20:33.182560 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-06-03 15:20:33.183759 | orchestrator | 2025-06-03 15:20:33.184703 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-06-03 15:20:33.185466 | orchestrator | Tuesday 03 June 2025 15:20:33 +0000 (0:00:01.190) 0:00:24.798 ********** 2025-06-03 15:20:33.337134 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:20:33.420087 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:20:33.506645 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:20:33.593609 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:20:33.675624 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:20:33.813468 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:20:33.814346 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:20:33.815771 | orchestrator | 2025-06-03 15:20:33.816541 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-06-03 15:20:33.819644 | orchestrator | Tuesday 03 June 2025 15:20:33 +0000 (0:00:00.649) 0:00:25.448 ********** 2025-06-03 15:20:37.309623 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, 
testbed-node-0, testbed-manager, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:20:37.310724 | orchestrator | 2025-06-03 15:20:37.312911 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-06-03 15:20:37.313797 | orchestrator | Tuesday 03 June 2025 15:20:37 +0000 (0:00:03.490) 0:00:28.938 ********** 2025-06-03 15:20:42.332933 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:42.334097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:42.335968 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:42.337944 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:42.339035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:42.340509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:42.342180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:42.343352 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:42.344463 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:42.345743 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:42.346601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:42.347891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-03 
15:20:42.349031 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:42.350640 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:42.351387 | orchestrator | 2025-06-03 15:20:42.352308 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-06-03 15:20:42.352908 | orchestrator | Tuesday 03 June 2025 15:20:42 +0000 (0:00:05.023) 0:00:33.962 ********** 2025-06-03 15:20:47.160682 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:47.163847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:47.163945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:47.164701 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:47.165892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:47.168002 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:47.168057 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:47.173242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:47.173828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:47.175934 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-03 15:20:47.176670 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': 
{'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:47.178417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:47.178459 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:47.179617 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-03 15:20:47.180749 | orchestrator | 2025-06-03 15:20:47.181372 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-06-03 15:20:47.182245 | orchestrator | Tuesday 03 June 2025 15:20:47 +0000 (0:00:04.831) 0:00:38.793 ********** 2025-06-03 15:20:48.479719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:20:48.481299 | orchestrator | 2025-06-03 15:20:48.481615 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-03 15:20:48.483407 | orchestrator | Tuesday 03 June 2025 15:20:48 +0000 (0:00:01.315) 0:00:40.108 ********** 2025-06-03 
15:20:48.960041 | orchestrator | ok: [testbed-manager] 2025-06-03 15:20:49.701157 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:20:49.702677 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:20:49.702715 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:20:49.703912 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:20:49.705550 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:20:49.705973 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:20:49.707145 | orchestrator | 2025-06-03 15:20:49.708617 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-03 15:20:49.709515 | orchestrator | Tuesday 03 June 2025 15:20:49 +0000 (0:00:01.223) 0:00:41.332 ********** 2025-06-03 15:20:49.794822 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-03 15:20:49.795231 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-03 15:20:49.795266 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-03 15:20:49.795932 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-03 15:20:49.914391 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:20:49.915058 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-03 15:20:49.916602 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-03 15:20:49.917877 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-03 15:20:49.921774 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-03 15:20:50.018503 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:20:50.019656 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-03 15:20:50.020790 | 
orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-03 15:20:50.021378 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-03 15:20:50.022297 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-03 15:20:50.130969 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:20:50.132856 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-03 15:20:50.134747 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-03 15:20:50.136588 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-03 15:20:50.138240 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-03 15:20:50.242973 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:20:50.244013 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-03 15:20:50.245143 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-03 15:20:50.246711 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-03 15:20:50.248149 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-03 15:20:50.323980 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:20:50.324117 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-03 15:20:50.324617 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-03 15:20:50.325359 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-03 15:20:51.789105 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  
2025-06-03 15:20:51.789835 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:20:51.791362 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-03 15:20:51.792644 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-03 15:20:51.793675 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-03 15:20:51.794361 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-03 15:20:51.794946 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:20:51.795873 | orchestrator | 2025-06-03 15:20:51.796656 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-06-03 15:20:51.797088 | orchestrator | Tuesday 03 June 2025 15:20:51 +0000 (0:00:02.087) 0:00:43.419 ********** 2025-06-03 15:20:51.952148 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:20:52.037264 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:20:52.126236 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:20:52.238173 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:20:52.324775 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:20:52.444269 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:20:52.444400 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:20:52.444417 | orchestrator | 2025-06-03 15:20:52.444431 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-06-03 15:20:52.444445 | orchestrator | Tuesday 03 June 2025 15:20:52 +0000 (0:00:00.653) 0:00:44.072 ********** 2025-06-03 15:20:52.644870 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:20:52.730509 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:20:53.013598 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:20:53.105633 | orchestrator | skipping: [testbed-node-2] 2025-06-03 
15:20:53.197518 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:20:53.245224 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:20:53.245657 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:20:53.246542 | orchestrator | 2025-06-03 15:20:53.247571 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:20:53.248048 | orchestrator | 2025-06-03 15:20:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:20:53.248512 | orchestrator | 2025-06-03 15:20:53 | INFO  | Please wait and do not abort execution. 2025-06-03 15:20:53.249703 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:20:53.250251 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:20:53.251213 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:20:53.252505 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:20:53.253121 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:20:53.254336 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:20:53.255247 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:20:53.256316 | orchestrator | 2025-06-03 15:20:53.256992 | orchestrator | 2025-06-03 15:20:53.257784 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:20:53.258441 | orchestrator | Tuesday 03 June 2025 15:20:53 +0000 (0:00:00.807) 0:00:44.880 ********** 2025-06-03 15:20:53.258937 | orchestrator | 
=============================================================================== 2025-06-03 15:20:53.259798 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.02s 2025-06-03 15:20:53.260560 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.83s 2025-06-03 15:20:53.260847 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.49s 2025-06-03 15:20:53.261895 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.22s 2025-06-03 15:20:53.262991 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.21s 2025-06-03 15:20:53.263891 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.09s 2025-06-03 15:20:53.265075 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.06s 2025-06-03 15:20:53.266327 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.00s 2025-06-03 15:20:53.267293 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.78s 2025-06-03 15:20:53.267996 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.69s 2025-06-03 15:20:53.269001 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.52s 2025-06-03 15:20:53.270120 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.32s 2025-06-03 15:20:53.272083 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.29s 2025-06-03 15:20:53.272741 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.22s 2025-06-03 15:20:53.273580 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.19s 2025-06-03 15:20:53.274057 | orchestrator | 
osism.commons.network : Include type specific tasks --------------------- 1.18s 2025-06-03 15:20:53.274481 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.15s 2025-06-03 15:20:53.275208 | orchestrator | osism.commons.network : Create required directories --------------------- 1.02s 2025-06-03 15:20:53.275598 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.01s 2025-06-03 15:20:53.276524 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.92s 2025-06-03 15:20:53.917658 | orchestrator | + osism apply wireguard 2025-06-03 15:20:55.637429 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:20:55.637560 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:20:55.637578 | orchestrator | Registering Redlock._release_script 2025-06-03 15:20:55.698577 | orchestrator | 2025-06-03 15:20:55 | INFO  | Task 9ed0995c-1ebb-4e4c-bb50-f7926611ef58 (wireguard) was prepared for execution. 2025-06-03 15:20:55.698666 | orchestrator | 2025-06-03 15:20:55 | INFO  | It takes a moment until task 9ed0995c-1ebb-4e4c-bb50-f7926611ef58 (wireguard) has been started and output is visible here. 
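An observation on the VXLAN loop items logged above: each host's `dests` list is simply the `local_ip` of every other VXLAN member, and the ordering in the log matches a lexicographic sort of the IP strings. A minimal sketch reproducing the logged value for testbed-node-0 — the member map is copied from the log, but whether the role itself sorts lexicographically is an assumption:

```python
# Per-node VXLAN peer lists as seen in the "Create systemd networkd netdev
# files" task: dests = every member's local_ip except the node's own.
# Member IPs are taken verbatim from the job log above.
MEMBERS = {
    "testbed-manager": "192.168.16.5",
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
    "testbed-node-3": "192.168.16.13",
    "testbed-node-4": "192.168.16.14",
    "testbed-node-5": "192.168.16.15",
}

def dests(host: str) -> list[str]:
    # Lexicographic string sort matches the log ordering
    # (e.g. "192.168.16.15" sorts before "192.168.16.5").
    return sorted(ip for h, ip in MEMBERS.items() if h != host)
```

This explains why `192.168.16.5` appears last in every node's list except the manager's: as a string it sorts after the two-digit host parts.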
2025-06-03 15:20:59.746624 | orchestrator | 2025-06-03 15:20:59.748322 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-06-03 15:20:59.748423 | orchestrator | 2025-06-03 15:20:59.748885 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-06-03 15:20:59.749349 | orchestrator | Tuesday 03 June 2025 15:20:59 +0000 (0:00:00.249) 0:00:00.249 ********** 2025-06-03 15:21:01.350267 | orchestrator | ok: [testbed-manager] 2025-06-03 15:21:01.350560 | orchestrator | 2025-06-03 15:21:01.351079 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-06-03 15:21:01.351703 | orchestrator | Tuesday 03 June 2025 15:21:01 +0000 (0:00:01.605) 0:00:01.854 ********** 2025-06-03 15:21:07.776693 | orchestrator | changed: [testbed-manager] 2025-06-03 15:21:07.777797 | orchestrator | 2025-06-03 15:21:07.778779 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-06-03 15:21:07.780324 | orchestrator | Tuesday 03 June 2025 15:21:07 +0000 (0:00:06.425) 0:00:08.280 ********** 2025-06-03 15:21:08.356969 | orchestrator | changed: [testbed-manager] 2025-06-03 15:21:08.357135 | orchestrator | 2025-06-03 15:21:08.357385 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-06-03 15:21:08.357921 | orchestrator | Tuesday 03 June 2025 15:21:08 +0000 (0:00:00.580) 0:00:08.860 ********** 2025-06-03 15:21:08.772627 | orchestrator | changed: [testbed-manager] 2025-06-03 15:21:08.773652 | orchestrator | 2025-06-03 15:21:08.774266 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-06-03 15:21:08.775082 | orchestrator | Tuesday 03 June 2025 15:21:08 +0000 (0:00:00.417) 0:00:09.278 ********** 2025-06-03 15:21:09.304830 | orchestrator | ok: [testbed-manager] 2025-06-03 15:21:09.305346 | orchestrator | 2025-06-03 
15:21:09.306303 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-06-03 15:21:09.307266 | orchestrator | Tuesday 03 June 2025 15:21:09 +0000 (0:00:00.531) 0:00:09.810 ********** 2025-06-03 15:21:09.834351 | orchestrator | ok: [testbed-manager] 2025-06-03 15:21:09.834610 | orchestrator | 2025-06-03 15:21:09.836270 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-06-03 15:21:09.837454 | orchestrator | Tuesday 03 June 2025 15:21:09 +0000 (0:00:00.527) 0:00:10.338 ********** 2025-06-03 15:21:10.247871 | orchestrator | ok: [testbed-manager] 2025-06-03 15:21:10.247974 | orchestrator | 2025-06-03 15:21:10.248652 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-06-03 15:21:10.249403 | orchestrator | Tuesday 03 June 2025 15:21:10 +0000 (0:00:00.414) 0:00:10.752 ********** 2025-06-03 15:21:11.452046 | orchestrator | changed: [testbed-manager] 2025-06-03 15:21:11.452808 | orchestrator | 2025-06-03 15:21:11.453456 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-06-03 15:21:11.454989 | orchestrator | Tuesday 03 June 2025 15:21:11 +0000 (0:00:01.201) 0:00:11.954 ********** 2025-06-03 15:21:12.427597 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-03 15:21:12.428065 | orchestrator | changed: [testbed-manager] 2025-06-03 15:21:12.429893 | orchestrator | 2025-06-03 15:21:12.431429 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-06-03 15:21:12.432645 | orchestrator | Tuesday 03 June 2025 15:21:12 +0000 (0:00:00.976) 0:00:12.931 ********** 2025-06-03 15:21:14.175379 | orchestrator | changed: [testbed-manager] 2025-06-03 15:21:14.176282 | orchestrator | 2025-06-03 15:21:14.177085 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-06-03 
15:21:14.178954 | orchestrator | Tuesday 03 June 2025 15:21:14 +0000 (0:00:01.747) 0:00:14.679 ********** 2025-06-03 15:21:15.109065 | orchestrator | changed: [testbed-manager] 2025-06-03 15:21:15.110591 | orchestrator | 2025-06-03 15:21:15.111862 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:21:15.112103 | orchestrator | 2025-06-03 15:21:15 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:21:15.112835 | orchestrator | 2025-06-03 15:21:15 | INFO  | Please wait and do not abort execution. 2025-06-03 15:21:15.113796 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:21:15.114713 | orchestrator | 2025-06-03 15:21:15.115398 | orchestrator | 2025-06-03 15:21:15.116255 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:21:15.117799 | orchestrator | Tuesday 03 June 2025 15:21:15 +0000 (0:00:00.934) 0:00:15.614 ********** 2025-06-03 15:21:15.118520 | orchestrator | =============================================================================== 2025-06-03 15:21:15.119584 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.43s 2025-06-03 15:21:15.120452 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.75s 2025-06-03 15:21:15.121135 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.61s 2025-06-03 15:21:15.121771 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.20s 2025-06-03 15:21:15.122223 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.98s 2025-06-03 15:21:15.122801 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.93s 2025-06-03 15:21:15.123072 | orchestrator | 
osism.services.wireguard : Create public and private key - server ------- 0.58s 2025-06-03 15:21:15.123871 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s 2025-06-03 15:21:15.124297 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.53s 2025-06-03 15:21:15.124767 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s 2025-06-03 15:21:15.125286 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s 2025-06-03 15:21:15.739928 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-06-03 15:21:15.771426 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-06-03 15:21:15.771513 | orchestrator | Dload Upload Total Spent Left Speed 2025-06-03 15:21:15.848385 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 181 0 --:--:-- --:--:-- --:--:-- 181 2025-06-03 15:21:15.863443 | orchestrator | + osism apply --environment custom workarounds 2025-06-03 15:21:17.553067 | orchestrator | 2025-06-03 15:21:17 | INFO  | Trying to run play workarounds in environment custom 2025-06-03 15:21:17.557846 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:21:17.557900 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:21:17.557947 | orchestrator | Registering Redlock._release_script 2025-06-03 15:21:17.616423 | orchestrator | 2025-06-03 15:21:17 | INFO  | Task ca4b97fb-cc41-4149-abb5-ebd5d33fe65b (workarounds) was prepared for execution. 2025-06-03 15:21:17.616526 | orchestrator | 2025-06-03 15:21:17 | INFO  | It takes a moment until task ca4b97fb-cc41-4149-abb5-ebd5d33fe65b (workarounds) has been started and output is visible here. 
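For reference, the `wg0.conf` written by the "Copy wg0.conf configuration file" task above has roughly this shape on the manager. This is a generic sketch of a wg-quick server config, not the role's actual template — key material is redacted and the listen port, tunnel address, and allowed IPs are placeholders:

```ini
# /etc/wireguard/wg0.conf -- sketch only; keys generated by the
# "Create public and private key - server" / "Create preshared key" tasks.
[Interface]
PrivateKey = <server-private-key>
ListenPort = 51820
Address = <tunnel-address>/24

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = <client-tunnel-address>/32
```

The `Manage wg-quick@wg0.service service` task then enables and starts `wg-quick@wg0`, and the handler restarts it after configuration changes.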
2025-06-03 15:21:21.622992 | orchestrator | 2025-06-03 15:21:21.623648 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:21:21.626212 | orchestrator | 2025-06-03 15:21:21.626978 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-06-03 15:21:21.629308 | orchestrator | Tuesday 03 June 2025 15:21:21 +0000 (0:00:00.162) 0:00:00.162 ********** 2025-06-03 15:21:21.840615 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-06-03 15:21:21.927275 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-06-03 15:21:22.016974 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-06-03 15:21:22.101640 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-06-03 15:21:22.326199 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-06-03 15:21:22.499752 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-06-03 15:21:22.499956 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-06-03 15:21:22.500960 | orchestrator | 2025-06-03 15:21:22.501070 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-06-03 15:21:22.501841 | orchestrator | 2025-06-03 15:21:22.502155 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-06-03 15:21:22.502958 | orchestrator | Tuesday 03 June 2025 15:21:22 +0000 (0:00:00.878) 0:00:01.041 ********** 2025-06-03 15:21:24.859306 | orchestrator | ok: [testbed-manager] 2025-06-03 15:21:24.859387 | orchestrator | 2025-06-03 15:21:24.859642 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-06-03 15:21:24.860287 | orchestrator | 2025-06-03 15:21:24.860852 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-06-03 15:21:24.862167 | orchestrator | Tuesday 03 June 2025 15:21:24 +0000 (0:00:02.349) 0:00:03.391 ********** 2025-06-03 15:21:26.692836 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:21:26.693920 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:21:26.694795 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:21:26.696229 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:21:26.697460 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:21:26.698579 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:21:26.699417 | orchestrator | 2025-06-03 15:21:26.700304 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-06-03 15:21:26.700831 | orchestrator | 2025-06-03 15:21:26.701147 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-06-03 15:21:26.702078 | orchestrator | Tuesday 03 June 2025 15:21:26 +0000 (0:00:01.839) 0:00:05.230 ********** 2025-06-03 15:21:28.338521 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-03 15:21:28.338799 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-03 15:21:28.339552 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-03 15:21:28.340889 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-03 15:21:28.341987 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-03 15:21:28.342582 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-03 15:21:28.343806 | orchestrator | 2025-06-03 15:21:28.344407 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-06-03 15:21:28.344936 | orchestrator | Tuesday 03 June 2025 15:21:28 +0000 (0:00:01.644) 0:00:06.874 ********** 2025-06-03 15:21:32.244930 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:21:32.245231 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:21:32.247753 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:21:32.251288 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:21:32.252001 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:21:32.252475 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:21:32.253599 | orchestrator | 2025-06-03 15:21:32.254898 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-06-03 15:21:32.255388 | orchestrator | Tuesday 03 June 2025 15:21:32 +0000 (0:00:03.908) 0:00:10.782 ********** 2025-06-03 15:21:32.417509 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:21:32.500560 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:21:32.578283 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:21:32.658170 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:21:32.990247 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:21:32.992329 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:21:32.992627 | orchestrator | 2025-06-03 15:21:32.994489 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-06-03 15:21:32.996048 | orchestrator | 2025-06-03 15:21:32.997216 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-06-03 15:21:32.998614 | orchestrator | Tuesday 03 June 2025 15:21:32 +0000 (0:00:00.747) 0:00:11.530 ********** 2025-06-03 15:21:34.647359 | orchestrator | changed: [testbed-manager] 2025-06-03 15:21:34.647647 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:21:34.648494 | orchestrator | changed: [testbed-node-4] 2025-06-03 
15:21:34.651625 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:21:34.651700 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:21:34.653591 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:21:34.653647 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:21:34.653657 | orchestrator | 2025-06-03 15:21:34.653766 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-06-03 15:21:34.654292 | orchestrator | Tuesday 03 June 2025 15:21:34 +0000 (0:00:01.656) 0:00:13.186 ********** 2025-06-03 15:21:36.360283 | orchestrator | changed: [testbed-manager] 2025-06-03 15:21:36.360927 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:21:36.362668 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:21:36.363289 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:21:36.364124 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:21:36.365057 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:21:36.366108 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:21:36.367165 | orchestrator | 2025-06-03 15:21:36.367915 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-06-03 15:21:36.368260 | orchestrator | Tuesday 03 June 2025 15:21:36 +0000 (0:00:01.709) 0:00:14.896 ********** 2025-06-03 15:21:37.935882 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:21:37.937086 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:21:37.938116 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:21:37.939411 | orchestrator | ok: [testbed-manager] 2025-06-03 15:21:37.941138 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:21:37.942384 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:21:37.942681 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:21:37.943749 | orchestrator | 2025-06-03 15:21:37.944410 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-06-03 15:21:37.945375 | orchestrator 
| Tuesday 03 June 2025 15:21:37 +0000 (0:00:01.578) 0:00:16.475 ********** 2025-06-03 15:21:40.101268 | orchestrator | changed: [testbed-manager] 2025-06-03 15:21:40.106547 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:21:40.106992 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:21:40.108337 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:21:40.109362 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:21:40.110314 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:21:40.111305 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:21:40.112531 | orchestrator | 2025-06-03 15:21:40.113004 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-06-03 15:21:40.113399 | orchestrator | Tuesday 03 June 2025 15:21:40 +0000 (0:00:02.162) 0:00:18.637 ********** 2025-06-03 15:21:40.280990 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:21:40.370080 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:21:40.452356 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:21:40.528064 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:21:40.610676 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:21:40.734740 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:21:40.736113 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:21:40.737224 | orchestrator | 2025-06-03 15:21:40.738701 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-06-03 15:21:40.738933 | orchestrator | 2025-06-03 15:21:40.739972 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-06-03 15:21:40.740614 | orchestrator | Tuesday 03 June 2025 15:21:40 +0000 (0:00:00.636) 0:00:19.274 ********** 2025-06-03 15:21:43.505483 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:21:43.505590 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:21:43.507181 | orchestrator | ok: 
[testbed-manager] 2025-06-03 15:21:43.507684 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:21:43.511135 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:21:43.512995 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:21:43.513640 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:21:43.514335 | orchestrator | 2025-06-03 15:21:43.515212 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:21:43.515876 | orchestrator | 2025-06-03 15:21:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:21:43.517015 | orchestrator | 2025-06-03 15:21:43 | INFO  | Please wait and do not abort execution. 2025-06-03 15:21:43.517163 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:21:43.518099 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:43.518493 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:43.519180 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:43.519931 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:43.520328 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:43.521919 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:43.522944 | orchestrator | 2025-06-03 15:21:43.523747 | orchestrator | 2025-06-03 15:21:43.524921 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:21:43.526278 | orchestrator | Tuesday 03 June 2025 15:21:43 +0000 (0:00:02.772) 0:00:22.046 ********** 2025-06-03 
15:21:43.526647 | orchestrator | =============================================================================== 2025-06-03 15:21:43.527057 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.91s 2025-06-03 15:21:43.527928 | orchestrator | Install python3-docker -------------------------------------------------- 2.77s 2025-06-03 15:21:43.528359 | orchestrator | Apply netplan configuration --------------------------------------------- 2.35s 2025-06-03 15:21:43.528778 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.16s 2025-06-03 15:21:43.529688 | orchestrator | Apply netplan configuration --------------------------------------------- 1.84s 2025-06-03 15:21:43.530648 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.71s 2025-06-03 15:21:43.531612 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.66s 2025-06-03 15:21:43.532340 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.64s 2025-06-03 15:21:43.532963 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.58s 2025-06-03 15:21:43.533619 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.88s 2025-06-03 15:21:43.534329 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.75s 2025-06-03 15:21:43.535356 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.64s 2025-06-03 15:21:44.190948 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-06-03 15:21:45.872072 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:21:45.872160 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:21:45.872173 | orchestrator | Registering Redlock._release_script 2025-06-03 15:21:45.946277 | orchestrator | 2025-06-03 
15:21:45 | INFO  | Task 66290d82-00e8-408b-a35a-0b560146152d (reboot) was prepared for execution. 2025-06-03 15:21:45.946357 | orchestrator | 2025-06-03 15:21:45 | INFO  | It takes a moment until task 66290d82-00e8-408b-a35a-0b560146152d (reboot) has been started and output is visible here. 2025-06-03 15:21:50.048008 | orchestrator | 2025-06-03 15:21:50.048302 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-03 15:21:50.049535 | orchestrator | 2025-06-03 15:21:50.051883 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-03 15:21:50.052535 | orchestrator | Tuesday 03 June 2025 15:21:50 +0000 (0:00:00.252) 0:00:00.252 ********** 2025-06-03 15:21:50.142864 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:21:50.143143 | orchestrator | 2025-06-03 15:21:50.144192 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-03 15:21:50.145632 | orchestrator | Tuesday 03 June 2025 15:21:50 +0000 (0:00:00.095) 0:00:00.348 ********** 2025-06-03 15:21:51.048442 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:21:51.049439 | orchestrator | 2025-06-03 15:21:51.050260 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-03 15:21:51.053757 | orchestrator | Tuesday 03 June 2025 15:21:51 +0000 (0:00:00.905) 0:00:01.253 ********** 2025-06-03 15:21:51.163250 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:21:51.163409 | orchestrator | 2025-06-03 15:21:51.164070 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-03 15:21:51.164687 | orchestrator | 2025-06-03 15:21:51.165476 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-03 15:21:51.167082 | orchestrator | Tuesday 03 June 2025 15:21:51 +0000 (0:00:00.113) 0:00:01.367 ********** 2025-06-03 
15:21:51.265497 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:21:51.266539 | orchestrator | 2025-06-03 15:21:51.266921 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-03 15:21:51.267848 | orchestrator | Tuesday 03 June 2025 15:21:51 +0000 (0:00:00.103) 0:00:01.471 ********** 2025-06-03 15:21:51.914516 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:21:51.914788 | orchestrator | 2025-06-03 15:21:51.915546 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-03 15:21:51.916863 | orchestrator | Tuesday 03 June 2025 15:21:51 +0000 (0:00:00.645) 0:00:02.117 ********** 2025-06-03 15:21:52.023034 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:21:52.025356 | orchestrator | 2025-06-03 15:21:52.025517 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-03 15:21:52.025828 | orchestrator | 2025-06-03 15:21:52.027969 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-03 15:21:52.029413 | orchestrator | Tuesday 03 June 2025 15:21:52 +0000 (0:00:00.107) 0:00:02.224 ********** 2025-06-03 15:21:52.253985 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:21:52.254245 | orchestrator | 2025-06-03 15:21:52.254489 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-03 15:21:52.255169 | orchestrator | Tuesday 03 June 2025 15:21:52 +0000 (0:00:00.229) 0:00:02.454 ********** 2025-06-03 15:21:52.887431 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:21:52.887535 | orchestrator | 2025-06-03 15:21:52.887941 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-03 15:21:52.888368 | orchestrator | Tuesday 03 June 2025 15:21:52 +0000 (0:00:00.639) 0:00:03.093 ********** 2025-06-03 15:21:53.013392 | orchestrator | skipping: 
[testbed-node-2] 2025-06-03 15:21:53.013855 | orchestrator | 2025-06-03 15:21:53.014983 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-03 15:21:53.015810 | orchestrator | 2025-06-03 15:21:53.016822 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-03 15:21:53.017339 | orchestrator | Tuesday 03 June 2025 15:21:53 +0000 (0:00:00.122) 0:00:03.216 ********** 2025-06-03 15:21:53.117837 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:21:53.117938 | orchestrator | 2025-06-03 15:21:53.119016 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-03 15:21:53.119255 | orchestrator | Tuesday 03 June 2025 15:21:53 +0000 (0:00:00.106) 0:00:03.322 ********** 2025-06-03 15:21:53.792106 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:21:53.792373 | orchestrator | 2025-06-03 15:21:53.793180 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-03 15:21:53.793870 | orchestrator | Tuesday 03 June 2025 15:21:53 +0000 (0:00:00.676) 0:00:03.999 ********** 2025-06-03 15:21:53.899463 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:21:53.899981 | orchestrator | 2025-06-03 15:21:53.900626 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-03 15:21:53.902655 | orchestrator | 2025-06-03 15:21:53.903138 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-03 15:21:53.904176 | orchestrator | Tuesday 03 June 2025 15:21:53 +0000 (0:00:00.104) 0:00:04.103 ********** 2025-06-03 15:21:54.007539 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:21:54.007725 | orchestrator | 2025-06-03 15:21:54.008556 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-03 15:21:54.009318 | orchestrator | 
Tuesday 03 June 2025 15:21:53 +0000 (0:00:00.109) 0:00:04.213 ********** 2025-06-03 15:21:54.678267 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:21:54.678724 | orchestrator | 2025-06-03 15:21:54.679629 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-03 15:21:54.680292 | orchestrator | Tuesday 03 June 2025 15:21:54 +0000 (0:00:00.669) 0:00:04.883 ********** 2025-06-03 15:21:54.790268 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:21:54.790573 | orchestrator | 2025-06-03 15:21:54.791702 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-03 15:21:54.792370 | orchestrator | 2025-06-03 15:21:54.793341 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-03 15:21:54.793947 | orchestrator | Tuesday 03 June 2025 15:21:54 +0000 (0:00:00.109) 0:00:04.992 ********** 2025-06-03 15:21:54.922520 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:21:54.922685 | orchestrator | 2025-06-03 15:21:54.923422 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-03 15:21:54.924722 | orchestrator | Tuesday 03 June 2025 15:21:54 +0000 (0:00:00.135) 0:00:05.128 ********** 2025-06-03 15:21:55.602869 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:21:55.603481 | orchestrator | 2025-06-03 15:21:55.605498 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-03 15:21:55.605545 | orchestrator | Tuesday 03 June 2025 15:21:55 +0000 (0:00:00.678) 0:00:05.807 ********** 2025-06-03 15:21:55.636167 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:21:55.636360 | orchestrator | 2025-06-03 15:21:55.637580 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:21:55.637655 | orchestrator | 2025-06-03 15:21:55 | INFO  | Play has been 
completed. There may now be a delay until all logs have been written. 2025-06-03 15:21:55.637673 | orchestrator | 2025-06-03 15:21:55 | INFO  | Please wait and do not abort execution. 2025-06-03 15:21:55.638802 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:55.638848 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:55.639573 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:55.639698 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:55.640444 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:55.641025 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:21:55.641723 | orchestrator | 2025-06-03 15:21:55.642427 | orchestrator | 2025-06-03 15:21:55.643456 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:21:55.645367 | orchestrator | Tuesday 03 June 2025 15:21:55 +0000 (0:00:00.036) 0:00:05.843 ********** 2025-06-03 15:21:55.646646 | orchestrator | =============================================================================== 2025-06-03 15:21:55.647408 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.22s 2025-06-03 15:21:55.647703 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.78s 2025-06-03 15:21:55.651046 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.59s 2025-06-03 15:21:56.255776 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-06-03 15:21:57.978739 | orchestrator | Registering Redlock._acquired_script 
2025-06-03 15:21:57.978836 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:21:57.978851 | orchestrator | Registering Redlock._release_script 2025-06-03 15:21:58.040954 | orchestrator | 2025-06-03 15:21:58 | INFO  | Task b0fc4384-dbd8-40ac-a0ac-eae080243fc1 (wait-for-connection) was prepared for execution. 2025-06-03 15:21:58.041010 | orchestrator | 2025-06-03 15:21:58 | INFO  | It takes a moment until task b0fc4384-dbd8-40ac-a0ac-eae080243fc1 (wait-for-connection) has been started and output is visible here. 2025-06-03 15:22:02.168525 | orchestrator | 2025-06-03 15:22:02.169757 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-06-03 15:22:02.169814 | orchestrator | 2025-06-03 15:22:02.170468 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-06-03 15:22:02.171984 | orchestrator | Tuesday 03 June 2025 15:22:02 +0000 (0:00:00.238) 0:00:00.238 ********** 2025-06-03 15:22:14.673093 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:22:14.673372 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:22:14.673396 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:22:14.673401 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:22:14.673767 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:22:14.674164 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:22:14.674721 | orchestrator | 2025-06-03 15:22:14.675640 | orchestrator | 2025-06-03 15:22:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:22:14.676401 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:22:14.676452 | orchestrator | 2025-06-03 15:22:14 | INFO  | Please wait and do not abort execution. 
2025-06-03 15:22:14.676503 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:22:14.679219 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:22:14.680181 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:22:14.680684 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:22:14.681127 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:22:14.681618 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:22:14.682117 | orchestrator | 2025-06-03 15:22:14.682565 | orchestrator | 2025-06-03 15:22:14.682957 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:22:14.683529 | orchestrator | Tuesday 03 June 2025 15:22:14 +0000 (0:00:12.499) 0:00:12.738 ********** 2025-06-03 15:22:14.683844 | orchestrator | =============================================================================== 2025-06-03 15:22:14.684329 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.50s 2025-06-03 15:22:15.270331 | orchestrator | + osism apply hddtemp 2025-06-03 15:22:17.091533 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:22:17.091630 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:22:17.091643 | orchestrator | Registering Redlock._release_script 2025-06-03 15:22:17.153815 | orchestrator | 2025-06-03 15:22:17 | INFO  | Task c3891b63-07e6-4f03-a801-cd8859dec855 (hddtemp) was prepared for execution. 
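The `wait-for-connection` play above blocks for about 12.5 s until every rebooted node answers again. Outside Ansible, the same wait can be sketched as a plain shell poll against the SSH port; the function name, the `SLEEP_INTERVAL` override, and the timeouts here are illustrative assumptions, not taken from the testbed scripts.

```shell
#!/usr/bin/env bash
# Minimal sketch of "wait until remote system is reachable": poll a TCP
# port (SSH by default) until it accepts a connection or a deadline passes.
# Host, port, and timeout are illustrative placeholders.
wait_for_ssh() {
    local host=$1 port=${2:-22} timeout=${3:-600}
    local interval=${SLEEP_INTERVAL:-5} waited=0
    # /dev/tcp is a bash-ism; `timeout 2` bounds each connection attempt.
    until timeout 2 bash -c ">/dev/tcp/$host/$port" 2>/dev/null; do
        (( waited += interval ))
        if (( waited >= timeout )); then
            echo "$host:$port still unreachable after ${timeout}s" >&2
            return 1
        fi
        sleep "$interval"
    done
}
```

Unlike the Ansible `wait_for_connection` module, which verifies a full transport plus Python round trip, this sketch only confirms that the port accepts connections.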
2025-06-03 15:22:17.153877 | orchestrator | 2025-06-03 15:22:17 | INFO  | It takes a moment until task c3891b63-07e6-4f03-a801-cd8859dec855 (hddtemp) has been started and output is visible here. 2025-06-03 15:22:21.468056 | orchestrator | 2025-06-03 15:22:21.468636 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-06-03 15:22:21.471415 | orchestrator | 2025-06-03 15:22:21.471452 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-06-03 15:22:21.472520 | orchestrator | Tuesday 03 June 2025 15:22:21 +0000 (0:00:00.275) 0:00:00.276 ********** 2025-06-03 15:22:21.625278 | orchestrator | ok: [testbed-manager] 2025-06-03 15:22:21.709782 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:22:21.778842 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:22:21.858946 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:22:22.076532 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:22:22.223898 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:22:22.224026 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:22:22.224049 | orchestrator | 2025-06-03 15:22:22.224069 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-06-03 15:22:22.224717 | orchestrator | Tuesday 03 June 2025 15:22:22 +0000 (0:00:00.750) 0:00:01.026 ********** 2025-06-03 15:22:23.458997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:22:23.462584 | orchestrator | 2025-06-03 15:22:23.462659 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-06-03 15:22:23.462675 | orchestrator | Tuesday 03 June 2025 15:22:23 +0000 (0:00:01.238) 0:00:02.265 ********** 2025-06-03 15:22:25.539660 | 
orchestrator | ok: [testbed-manager] 2025-06-03 15:22:25.539761 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:22:25.539775 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:22:25.539787 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:22:25.539798 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:22:25.539810 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:22:25.541261 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:22:25.541300 | orchestrator | 2025-06-03 15:22:25.541323 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-06-03 15:22:25.541635 | orchestrator | Tuesday 03 June 2025 15:22:25 +0000 (0:00:02.079) 0:00:04.344 ********** 2025-06-03 15:22:26.088338 | orchestrator | changed: [testbed-manager] 2025-06-03 15:22:26.169780 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:22:26.591895 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:22:26.592372 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:22:26.593927 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:22:26.596297 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:22:26.596711 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:22:26.597720 | orchestrator | 2025-06-03 15:22:26.598376 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-06-03 15:22:26.598714 | orchestrator | Tuesday 03 June 2025 15:22:26 +0000 (0:00:01.055) 0:00:05.400 ********** 2025-06-03 15:22:27.639102 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:22:27.640533 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:22:27.640653 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:22:27.641807 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:22:27.642351 | orchestrator | ok: [testbed-manager] 2025-06-03 15:22:27.643156 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:22:27.643693 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:22:27.644528 | orchestrator | 
2025-06-03 15:22:27.645013 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-06-03 15:22:27.645505 | orchestrator | Tuesday 03 June 2025 15:22:27 +0000 (0:00:01.048) 0:00:06.448 **********
2025-06-03 15:22:28.005904 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:22:28.097416 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:22:28.166306 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:22:28.240881 | orchestrator | changed: [testbed-manager]
2025-06-03 15:22:28.358826 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:22:28.359162 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:22:28.360515 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:22:28.361343 | orchestrator |
2025-06-03 15:22:28.362499 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-06-03 15:22:28.363186 | orchestrator | Tuesday 03 June 2025 15:22:28 +0000 (0:00:00.718) 0:00:07.167 **********
2025-06-03 15:22:41.239499 | orchestrator | changed: [testbed-manager]
2025-06-03 15:22:41.239617 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:22:41.241635 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:22:41.243663 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:22:41.243792 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:22:41.244499 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:22:41.245054 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:22:41.246844 | orchestrator |
2025-06-03 15:22:41.249818 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-06-03 15:22:41.250370 | orchestrator | Tuesday 03 June 2025 15:22:41 +0000 (0:00:12.878) 0:00:20.045 **********
2025-06-03 15:22:42.690249 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:22:42.690954 | orchestrator |
2025-06-03 15:22:42.696001 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-06-03 15:22:42.696792 | orchestrator | Tuesday 03 June 2025 15:22:42 +0000 (0:00:01.451) 0:00:21.496 **********
2025-06-03 15:22:44.526617 | orchestrator | changed: [testbed-manager]
2025-06-03 15:22:44.526699 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:22:44.526873 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:22:44.527484 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:22:44.528392 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:22:44.528797 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:22:44.530532 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:22:44.531199 | orchestrator |
2025-06-03 15:22:44.531545 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:22:44.531797 | orchestrator | 2025-06-03 15:22:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-03 15:22:44.532328 | orchestrator | 2025-06-03 15:22:44 | INFO  | Please wait and do not abort execution.
2025-06-03 15:22:44.532650 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:22:44.533663 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-03 15:22:44.534002 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-03 15:22:44.534734 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-03 15:22:44.535369 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-03 15:22:44.535970 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-03 15:22:44.536685 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-03 15:22:44.537424 | orchestrator |
2025-06-03 15:22:44.537857 | orchestrator |
2025-06-03 15:22:44.538656 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:22:44.539043 | orchestrator | Tuesday 03 June 2025 15:22:44 +0000 (0:00:01.840) 0:00:23.337 **********
2025-06-03 15:22:44.540343 | orchestrator | ===============================================================================
2025-06-03 15:22:44.540672 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.88s
2025-06-03 15:22:44.541388 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.08s
2025-06-03 15:22:44.541702 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.84s
2025-06-03 15:22:44.542456 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.45s
2025-06-03 15:22:44.542919 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.24s
2025-06-03 15:22:44.543271 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.06s
2025-06-03 15:22:44.543796 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.05s
2025-06-03 15:22:44.544230 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.75s
2025-06-03 15:22:44.544582 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.72s
2025-06-03 15:22:44.896327 | orchestrator | ++ semver latest 7.1.1
2025-06-03 15:22:44.943671 | orchestrator | + [[ -1 -ge 0 ]]
2025-06-03 15:22:44.943768 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-06-03 15:22:44.943785 | orchestrator | + sudo systemctl restart manager.service
2025-06-03 15:22:58.448532 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-06-03 15:22:58.448607 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-06-03 15:22:58.448642 | orchestrator | + local max_attempts=60
2025-06-03 15:22:58.448655 | orchestrator | + local name=ceph-ansible
2025-06-03 15:22:58.448668 | orchestrator | + local attempt_num=1
2025-06-03 15:22:58.448680 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-03 15:22:58.488088 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-03 15:22:58.488154 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-03 15:22:58.488167 | orchestrator | + sleep 5
2025-06-03 15:23:03.492563 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-03 15:23:03.524248 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-03 15:23:03.524335 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-03 15:23:03.524357 | orchestrator | + sleep 5
2025-06-03 15:23:08.528040 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-03 15:23:08.561669 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-03 15:23:08.561763 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-03 15:23:08.561778 | orchestrator | + sleep 5
2025-06-03 15:23:13.566148 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-03 15:23:13.615933 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-03 15:23:13.616036 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-03 15:23:13.616093 | orchestrator | + sleep 5
2025-06-03 15:23:18.620059 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-03 15:23:18.657215 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-03 15:23:18.657442 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-03 15:23:18.657461 | orchestrator | + sleep 5
2025-06-03 15:23:23.660897 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-03 15:23:23.698508 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-03 15:23:23.698599 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-03 15:23:23.698612 | orchestrator | + sleep 5
2025-06-03 15:23:28.703147 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-03 15:23:28.738002 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-03 15:23:28.738100 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-03 15:23:28.738114 | orchestrator | + sleep 5
2025-06-03 15:23:33.742695 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-03 15:23:33.796866 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-06-03 15:23:33.797759 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-03 15:23:33.797788 | orchestrator | + sleep 5
2025-06-03 15:23:38.802058 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-03 15:23:38.839468 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-06-03 15:23:38.839517 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-03 15:23:38.839523 | orchestrator | + sleep 5
2025-06-03 15:23:43.842842 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-03 15:23:43.879890 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-06-03 15:23:43.879951 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-03 15:23:43.879957 | orchestrator | + sleep 5
2025-06-03 15:23:48.885501 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-03 15:23:48.924344 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-06-03 15:23:48.924478 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-03 15:23:48.924493 | orchestrator | + sleep 5
2025-06-03 15:23:53.930066 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-03 15:23:53.961168 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-06-03 15:23:53.961260 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-03 15:23:53.961274 | orchestrator | + sleep 5
2025-06-03 15:23:58.964906 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-03 15:23:59.000210 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-06-03 15:23:59.000296 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-03 15:23:59.000317 | orchestrator | + sleep 5
2025-06-03 15:24:04.004582 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-03 15:24:04.044351 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-03 15:24:04.044483 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-06-03 15:24:04.044507 | orchestrator | + local max_attempts=60
2025-06-03 15:24:04.044527 | orchestrator | + local name=kolla-ansible
2025-06-03 15:24:04.044546 | orchestrator | + local attempt_num=1
2025-06-03 15:24:04.044812 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-06-03 15:24:04.084944 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-03 15:24:04.085044 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-06-03 15:24:04.085059 | orchestrator | + local max_attempts=60
2025-06-03 15:24:04.085072 | orchestrator | + local name=osism-ansible
2025-06-03 15:24:04.085084 | orchestrator | + local attempt_num=1
2025-06-03 15:24:04.085169 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-06-03 15:24:04.126543 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-03 15:24:04.126630 | orchestrator | + [[ true == \t\r\u\e ]]
2025-06-03 15:24:04.126644 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-06-03 15:24:04.320426 | orchestrator | ARA in ceph-ansible already disabled.
2025-06-03 15:24:04.499806 | orchestrator | ARA in kolla-ansible already disabled.
2025-06-03 15:24:04.647957 | orchestrator | ARA in osism-ansible already disabled.
2025-06-03 15:24:04.829428 | orchestrator | ARA in osism-kubernetes already disabled.
2025-06-03 15:24:04.832747 | orchestrator | + osism apply gather-facts
2025-06-03 15:24:06.623776 | orchestrator | Registering Redlock._acquired_script
2025-06-03 15:24:06.623885 | orchestrator | Registering Redlock._extend_script
2025-06-03 15:24:06.623899 | orchestrator | Registering Redlock._release_script
2025-06-03 15:24:06.689254 | orchestrator | 2025-06-03 15:24:06 | INFO  | Task aba3c2e5-7bdc-497b-b98d-c8b8a71ff5bf (gather-facts) was prepared for execution.
2025-06-03 15:24:06.689345 | orchestrator | 2025-06-03 15:24:06 | INFO  | It takes a moment until task aba3c2e5-7bdc-497b-b98d-c8b8a71ff5bf (gather-facts) has been started and output is visible here.
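The `wait_for_container_healthy` calls traced above can be reconstructed as roughly the following bash function (a sketch inferred from the `set -x` output, not the actual testbed script; `probe_health` is a hypothetical indirection added here so the polling logic can be read and exercised without Docker):

```shell
#!/usr/bin/env bash

# Hypothetical probe helper; the trace shows the real script calling
# `docker inspect` inline rather than through a function.
probe_health() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

# Poll a container until its health status is "healthy", sleeping 5s
# between checks and giving up after max_attempts checks.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(probe_health "$name")" == healthy ]]; do
        (( attempt_num++ == max_attempts )) && return 1
        sleep 5
    done
}
```

In the log above, ceph-ansible cycles through `unhealthy` and then `starting` for roughly a minute before the loop observes `healthy`; kolla-ansible and osism-ansible are already healthy on the first check.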
2025-06-03 15:24:10.702738 | orchestrator |
2025-06-03 15:24:10.704129 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-03 15:24:10.707280 | orchestrator |
2025-06-03 15:24:10.709684 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-03 15:24:10.710634 | orchestrator | Tuesday 03 June 2025 15:24:10 +0000 (0:00:00.223) 0:00:00.223 **********
2025-06-03 15:24:16.590756 | orchestrator | ok: [testbed-manager]
2025-06-03 15:24:16.593468 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:24:16.593516 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:24:16.593537 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:24:16.593556 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:24:16.593574 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:24:16.593592 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:24:16.596434 | orchestrator |
2025-06-03 15:24:16.596996 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-03 15:24:16.597290 | orchestrator |
2025-06-03 15:24:16.599953 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-03 15:24:16.600311 | orchestrator | Tuesday 03 June 2025 15:24:16 +0000 (0:00:05.891) 0:00:06.115 **********
2025-06-03 15:24:16.752751 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:24:16.829725 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:24:16.909278 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:24:16.990007 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:24:17.066894 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:24:17.111559 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:24:17.114212 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:24:17.115480 | orchestrator |
2025-06-03 15:24:17.117312 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:24:17.118815 | orchestrator | 2025-06-03 15:24:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-03 15:24:17.119739 | orchestrator | 2025-06-03 15:24:17 | INFO  | Please wait and do not abort execution.
2025-06-03 15:24:17.121234 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-03 15:24:17.122552 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-03 15:24:17.123868 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-03 15:24:17.125443 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-03 15:24:17.126359 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-03 15:24:17.127313 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-03 15:24:17.128155 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-03 15:24:17.129308 | orchestrator |
2025-06-03 15:24:17.129406 | orchestrator |
2025-06-03 15:24:17.130068 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:24:17.130977 | orchestrator | Tuesday 03 June 2025 15:24:17 +0000 (0:00:00.524) 0:00:06.639 **********
2025-06-03 15:24:17.131634 | orchestrator | ===============================================================================
2025-06-03 15:24:17.132482 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.89s
2025-06-03 15:24:17.133500 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2025-06-03 15:24:17.807868 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-06-03 15:24:17.823625 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-06-03 15:24:17.834668 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-06-03 15:24:17.852680 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-06-03 15:24:17.864844 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-06-03 15:24:17.881187 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-06-03 15:24:17.896738 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-06-03 15:24:17.915189 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-06-03 15:24:17.936533 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-06-03 15:24:17.955238 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-06-03 15:24:17.968414 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-06-03 15:24:17.985517 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-06-03 15:24:18.003442 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-06-03 15:24:18.022230 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-06-03 15:24:18.038530 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-06-03 15:24:18.062853 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-06-03 15:24:18.080875 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-06-03 15:24:18.102203 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-06-03 15:24:18.117695 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-06-03 15:24:18.129071 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-06-03 15:24:18.144230 | orchestrator | + [[ false == \t\r\u\e ]]
2025-06-03 15:24:18.616831 | orchestrator | ok: Runtime: 0:19:59.713193
2025-06-03 15:24:18.719851 |
2025-06-03 15:24:18.719996 | TASK [Deploy services]
2025-06-03 15:24:19.254820 | orchestrator | skipping: Conditional result was False
2025-06-03 15:24:19.272354 |
2025-06-03 15:24:19.272522 | TASK [Deploy in a nutshell]
2025-06-03 15:24:19.990935 | orchestrator | + set -e
2025-06-03 15:24:19.992532 | orchestrator |
2025-06-03 15:24:19.992586 | orchestrator | # PULL IMAGES
2025-06-03 15:24:19.992601 | orchestrator |
2025-06-03 15:24:19.992623 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-03 15:24:19.992644 | orchestrator | ++ export INTERACTIVE=false
2025-06-03 15:24:19.992658 | orchestrator | ++ INTERACTIVE=false
2025-06-03 15:24:19.992703 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-03 15:24:19.992726 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-03 15:24:19.992740 | orchestrator | + source /opt/manager-vars.sh
2025-06-03 15:24:19.992752 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-03 15:24:19.992770 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-03 15:24:19.992782 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-03 15:24:19.992800 | orchestrator | ++ CEPH_VERSION=reef
2025-06-03 15:24:19.992812 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-03 15:24:19.992830 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-03 15:24:19.992841 | orchestrator | ++ export MANAGER_VERSION=latest
2025-06-03 15:24:19.992855 | orchestrator | ++ MANAGER_VERSION=latest
2025-06-03 15:24:19.992867 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-03 15:24:19.992880 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-03 15:24:19.992891 | orchestrator | ++ export ARA=false
2025-06-03 15:24:19.992902 | orchestrator | ++ ARA=false
2025-06-03 15:24:19.992913 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-03 15:24:19.992924 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-03 15:24:19.992934 | orchestrator | ++ export TEMPEST=false
2025-06-03 15:24:19.992945 | orchestrator | ++ TEMPEST=false
2025-06-03 15:24:19.992956 | orchestrator | ++ export IS_ZUUL=true
2025-06-03 15:24:19.992967 | orchestrator | ++ IS_ZUUL=true
2025-06-03 15:24:19.992977 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.16
2025-06-03 15:24:19.992989 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.16
2025-06-03 15:24:19.992999 | orchestrator | ++ export EXTERNAL_API=false
2025-06-03 15:24:19.993010 | orchestrator | ++ EXTERNAL_API=false
2025-06-03 15:24:19.993021 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-03 15:24:19.993032 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-03 15:24:19.993043 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-03 15:24:19.993053 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-03 15:24:19.993065 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-03 15:24:19.993084 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-03 15:24:19.993095 | orchestrator | + echo
2025-06-03 15:24:19.993106 | orchestrator | + echo '# PULL IMAGES'
2025-06-03 15:24:19.993117 | orchestrator | + echo
2025-06-03 15:24:19.993136 | orchestrator | ++ semver latest 7.0.0
2025-06-03 15:24:20.050535 | orchestrator | + [[ -1 -ge 0 ]]
2025-06-03 15:24:20.050639 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-06-03 15:24:20.050658 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-06-03 15:24:21.707012 | orchestrator | 2025-06-03 15:24:21 | INFO  | Trying to run play pull-images in environment custom
2025-06-03 15:24:21.711007 | orchestrator | Registering Redlock._acquired_script
2025-06-03 15:24:21.711043 | orchestrator | Registering Redlock._extend_script
2025-06-03 15:24:21.711055 | orchestrator | Registering Redlock._release_script
2025-06-03 15:24:21.770664 | orchestrator | 2025-06-03 15:24:21 | INFO  | Task 0275be6a-2914-4f47-b999-86ad557ff9e5 (pull-images) was prepared for execution.
2025-06-03 15:24:21.770755 | orchestrator | 2025-06-03 15:24:21 | INFO  | It takes a moment until task 0275be6a-2914-4f47-b999-86ad557ff9e5 (pull-images) has been started and output is visible here.
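The `semver latest 7.0.0` followed by `[[ latest == \l\a\t\e\s\t ]]` in the trace is a version gate: the numeric comparison rejects `latest` (the helper prints -1 for it), so an explicit string check lets the development tag through anyway. A minimal sketch of that pattern; `semver` here is a stand-in stub written for illustration, not the testbed's actual helper:

```shell
#!/usr/bin/env bash

# Stub semver comparator: prints -1, 0, or 1 as $1 is older than, equal
# to, or newer than $2. Non-orderable tags like "latest" compare as older,
# mirroring the trace where `semver latest 7.0.0` printed -1.
semver() {
    if [[ $1 == latest ]]; then echo -1; return; fi
    if [[ $1 == "$2" ]]; then
        echo 0
    elif [[ $(printf '%s\n' "$1" "$2" | sort -V | head -n1) == "$1" ]]; then
        echo -1
    else
        echo 1
    fi
}

# Gate: true when the manager version is at least the minimum, or when it
# is the literal tag "latest".
manager_at_least() {
    local version=$1 minimum=$2
    [[ "$(semver "$version" "$minimum")" -ge 0 ]] || [[ "$version" == latest ]]
}
```

With this gate, a guarded step such as `osism apply -r 2 -e custom pull-images` runs both for sufficiently new pinned releases and for `latest`, which is exactly what the trace shows.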
2025-06-03 15:24:25.719201 | orchestrator |
2025-06-03 15:24:25.719429 | orchestrator | PLAY [Pull images] *************************************************************
2025-06-03 15:24:25.719460 | orchestrator |
2025-06-03 15:24:25.719481 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-06-03 15:24:25.719517 | orchestrator | Tuesday 03 June 2025 15:24:25 +0000 (0:00:00.170) 0:00:00.170 **********
2025-06-03 15:25:32.660157 | orchestrator | changed: [testbed-manager]
2025-06-03 15:25:32.660282 | orchestrator |
2025-06-03 15:25:32.660301 | orchestrator | TASK [Pull other images] *******************************************************
2025-06-03 15:25:32.660381 | orchestrator | Tuesday 03 June 2025 15:25:32 +0000 (0:01:06.945) 0:01:07.115 **********
2025-06-03 15:26:24.363740 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-06-03 15:26:24.363833 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-06-03 15:26:24.364827 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-06-03 15:26:24.365154 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-06-03 15:26:24.366587 | orchestrator | changed: [testbed-manager] => (item=common)
2025-06-03 15:26:24.367296 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-06-03 15:26:24.368088 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-06-03 15:26:24.368825 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-06-03 15:26:24.369110 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-06-03 15:26:24.369593 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-06-03 15:26:24.370257 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-06-03 15:26:24.370460 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-06-03 15:26:24.371029 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-06-03 15:26:24.371819 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-06-03 15:26:24.371969 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-06-03 15:26:24.372540 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-06-03 15:26:24.372877 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-06-03 15:26:24.373737 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-06-03 15:26:24.373907 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-06-03 15:26:24.374441 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-06-03 15:26:24.374885 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-06-03 15:26:24.375087 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-06-03 15:26:24.375493 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-06-03 15:26:24.375834 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-06-03 15:26:24.376091 | orchestrator |
2025-06-03 15:26:24.376455 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:26:24.376644 | orchestrator | 2025-06-03 15:26:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-03 15:26:24.376746 | orchestrator | 2025-06-03 15:26:24 | INFO  | Please wait and do not abort execution.
2025-06-03 15:26:24.377092 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:26:24.377485 | orchestrator |
2025-06-03 15:26:24.377710 | orchestrator |
2025-06-03 15:26:24.378062 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:26:24.378317 | orchestrator | Tuesday 03 June 2025 15:26:24 +0000 (0:00:51.705) 0:01:58.821 **********
2025-06-03 15:26:24.378789 | orchestrator | ===============================================================================
2025-06-03 15:26:24.378969 | orchestrator | Pull keystone image ---------------------------------------------------- 66.95s
2025-06-03 15:26:24.379273 | orchestrator | Pull other images ------------------------------------------------------ 51.71s
2025-06-03 15:26:26.064021 | orchestrator | 2025-06-03 15:26:26 | INFO  | Trying to run play wipe-partitions in environment custom
2025-06-03 15:26:26.067026 | orchestrator | Registering Redlock._acquired_script
2025-06-03 15:26:26.067075 | orchestrator | Registering Redlock._extend_script
2025-06-03 15:26:26.067087 | orchestrator | Registering Redlock._release_script
2025-06-03 15:26:26.111905 | orchestrator | 2025-06-03 15:26:26 | INFO  | Task b21200a3-0864-4c26-91e8-c4da8e9b7354 (wipe-partitions) was prepared for execution.
2025-06-03 15:26:26.111991 | orchestrator | 2025-06-03 15:26:26 | INFO  | It takes a moment until task b21200a3-0864-4c26-91e8-c4da8e9b7354 (wipe-partitions) has been started and output is visible here.
2025-06-03 15:26:29.676361 | orchestrator |
2025-06-03 15:26:29.676504 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-06-03 15:26:29.677020 | orchestrator |
2025-06-03 15:26:29.677329 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-06-03 15:26:29.681191 | orchestrator | Tuesday 03 June 2025 15:26:29 +0000 (0:00:00.119) 0:00:00.119 **********
2025-06-03 15:26:30.252942 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:26:30.254186 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:26:30.254315 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:26:30.254333 | orchestrator |
2025-06-03 15:26:30.255051 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-06-03 15:26:30.255075 | orchestrator | Tuesday 03 June 2025 15:26:30 +0000 (0:00:00.577) 0:00:00.697 **********
2025-06-03 15:26:30.404346 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:26:30.499133 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:26:30.500812 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:26:30.500981 | orchestrator |
2025-06-03 15:26:30.501239 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-06-03 15:26:30.501493 | orchestrator | Tuesday 03 June 2025 15:26:30 +0000 (0:00:00.242) 0:00:00.939 **********
2025-06-03 15:26:31.157322 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:26:31.159248 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:26:31.159991 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:26:31.160511 | orchestrator |
2025-06-03 15:26:31.161069 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-06-03 15:26:31.162269 | orchestrator | Tuesday 03 June 2025 15:26:31 +0000 (0:00:00.658) 0:00:01.597 **********
2025-06-03 15:26:31.276449 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:26:31.347104 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:26:31.347958 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:26:31.349275 | orchestrator |
2025-06-03 15:26:31.352986 | orchestrator | TASK [Check device availability] ***********************************************
2025-06-03 15:26:31.353756 | orchestrator | Tuesday 03 June 2025 15:26:31 +0000 (0:00:00.193) 0:00:01.791 **********
2025-06-03 15:26:32.436827 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-06-03 15:26:32.436983 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-06-03 15:26:32.437070 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-06-03 15:26:32.437088 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-06-03 15:26:32.437099 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-06-03 15:26:32.438816 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-06-03 15:26:32.439658 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-06-03 15:26:32.439919 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-06-03 15:26:32.440043 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-06-03 15:26:32.440460 | orchestrator |
2025-06-03 15:26:32.440635 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-06-03 15:26:32.442078 | orchestrator | Tuesday 03 June 2025 15:26:32 +0000 (0:00:01.087) 0:00:02.878 **********
2025-06-03 15:26:33.770463 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-06-03 15:26:33.770623 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-06-03 15:26:33.770655 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-06-03 15:26:33.770676 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-06-03 15:26:33.770694 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-06-03 15:26:33.770714 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-06-03 15:26:33.770875 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-06-03 15:26:33.770896 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-06-03 15:26:33.770907 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-06-03 15:26:33.771126 | orchestrator |
2025-06-03 15:26:33.771324 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-06-03 15:26:33.771584 | orchestrator | Tuesday 03 June 2025 15:26:33 +0000 (0:00:01.332) 0:00:04.211 **********
2025-06-03 15:26:35.945729 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-06-03 15:26:35.949155 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-06-03 15:26:35.949216 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-06-03 15:26:35.949334 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-06-03 15:26:35.949648 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-06-03 15:26:35.949829 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-06-03 15:26:35.950095 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-06-03 15:26:35.950357 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-06-03 15:26:35.950657 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-06-03 15:26:35.950817 | orchestrator |
2025-06-03 15:26:35.951022 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-06-03 15:26:35.951289 | orchestrator | Tuesday 03 June 2025 15:26:35 +0000 (0:00:02.178) 0:00:06.390 **********
2025-06-03 15:26:36.562186 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:26:36.562304 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:26:36.564044 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:26:36.564475 | orchestrator |
2025-06-03 15:26:36.565557 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-06-03 15:26:36.565590 | orchestrator | Tuesday 03 June 2025 15:26:36 +0000 (0:00:00.615) 0:00:07.005 **********
2025-06-03 15:26:37.164959 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:26:37.165044 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:26:37.165097 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:26:37.165447 | orchestrator |
2025-06-03 15:26:37.165812 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:26:37.168881 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-03 15:26:37.168897 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-03 15:26:37.168904 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-03 15:26:37.168911 | orchestrator |
2025-06-03 15:26:37.168918 | orchestrator | 2025-06-03 15:26:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-03 15:26:37.168928 | orchestrator | 2025-06-03 15:26:37 | INFO  | Please wait and do not abort execution.
2025-06-03 15:26:37.169348 | orchestrator | 2025-06-03 15:26:37.169910 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:26:37.170554 | orchestrator | Tuesday 03 June 2025 15:26:37 +0000 (0:00:00.603) 0:00:07.609 ********** 2025-06-03 15:26:37.171152 | orchestrator | =============================================================================== 2025-06-03 15:26:37.171622 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.18s 2025-06-03 15:26:37.172170 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.33s 2025-06-03 15:26:37.172693 | orchestrator | Check device availability ----------------------------------------------- 1.09s 2025-06-03 15:26:37.173073 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.66s 2025-06-03 15:26:37.173571 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s 2025-06-03 15:26:37.174061 | orchestrator | Request device events from the kernel ----------------------------------- 0.60s 2025-06-03 15:26:37.174571 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s 2025-06-03 15:26:37.174971 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s 2025-06-03 15:26:37.175420 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.19s 2025-06-03 15:26:39.016479 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:26:39.016578 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:26:39.016593 | orchestrator | Registering Redlock._release_script 2025-06-03 15:26:39.091872 | orchestrator | 2025-06-03 15:26:39 | INFO  | Task dba10c47-d8f0-4875-85c1-0bd274c143d9 (facts) was prepared for execution. 
2025-06-03 15:26:39.091974 | orchestrator | 2025-06-03 15:26:39 | INFO  | It takes a moment until task dba10c47-d8f0-4875-85c1-0bd274c143d9 (facts) has been started and output is visible here. 2025-06-03 15:26:43.642094 | orchestrator | 2025-06-03 15:26:43.647849 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-03 15:26:43.651974 | orchestrator | 2025-06-03 15:26:43.652004 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-03 15:26:43.652013 | orchestrator | Tuesday 03 June 2025 15:26:43 +0000 (0:00:00.267) 0:00:00.267 ********** 2025-06-03 15:26:44.770554 | orchestrator | ok: [testbed-manager] 2025-06-03 15:26:44.772217 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:26:44.773474 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:26:44.775147 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:26:44.776270 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:26:44.777657 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:26:44.778367 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:26:44.782141 | orchestrator | 2025-06-03 15:26:44.783326 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-03 15:26:44.783966 | orchestrator | Tuesday 03 June 2025 15:26:44 +0000 (0:00:01.128) 0:00:01.396 ********** 2025-06-03 15:26:44.893638 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:26:44.953672 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:26:45.014733 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:26:45.074592 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:26:45.130592 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:45.818340 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:45.819964 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:45.820923 | orchestrator | 2025-06-03 15:26:45.821885 | orchestrator | PLAY [Gather facts for 
all hosts] ********************************************** 2025-06-03 15:26:45.823312 | orchestrator | 2025-06-03 15:26:45.824823 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-03 15:26:45.825610 | orchestrator | Tuesday 03 June 2025 15:26:45 +0000 (0:00:01.047) 0:00:02.444 ********** 2025-06-03 15:26:50.558808 | orchestrator | ok: [testbed-manager] 2025-06-03 15:26:50.558918 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:26:50.558934 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:26:50.560900 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:26:50.560933 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:26:50.560944 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:26:50.560956 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:26:50.560967 | orchestrator | 2025-06-03 15:26:50.560979 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-03 15:26:50.560991 | orchestrator | 2025-06-03 15:26:50.561003 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-03 15:26:50.561014 | orchestrator | Tuesday 03 June 2025 15:26:50 +0000 (0:00:04.740) 0:00:07.184 ********** 2025-06-03 15:26:50.680806 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:26:50.750160 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:26:50.814708 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:26:50.875583 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:26:50.935845 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:50.968646 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:26:50.968742 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:26:50.968756 | orchestrator | 2025-06-03 15:26:50.968769 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:26:50.968864 | orchestrator | 2025-06-03 15:26:50 | INFO  | 
Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 15:26:50.969885 | orchestrator | 2025-06-03 15:26:50 | INFO  | Please wait and do not abort execution. 2025-06-03 15:26:50.970157 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:26:50.970328 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:26:50.970599 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:26:50.970774 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:26:50.972343 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:26:50.972386 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:26:50.972452 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:26:50.972556 | orchestrator | 2025-06-03 15:26:50.972901 | orchestrator | 2025-06-03 15:26:50.973150 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:26:50.973713 | orchestrator | Tuesday 03 June 2025 15:26:50 +0000 (0:00:00.415) 0:00:07.600 ********** 2025-06-03 15:26:50.974107 | orchestrator | =============================================================================== 2025-06-03 15:26:50.974329 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.74s 2025-06-03 15:26:50.974910 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s 2025-06-03 15:26:50.975012 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.05s 2025-06-03 15:26:50.975409 | orchestrator | Gather facts 
for all hosts ---------------------------------------------- 0.42s 2025-06-03 15:26:52.861880 | orchestrator | 2025-06-03 15:26:52 | INFO  | Task 2e3a8a2e-f8e1-466b-92ce-3336f053641c (ceph-configure-lvm-volumes) was prepared for execution. 2025-06-03 15:26:52.862105 | orchestrator | 2025-06-03 15:26:52 | INFO  | It takes a moment until task 2e3a8a2e-f8e1-466b-92ce-3336f053641c (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-06-03 15:26:56.663356 | orchestrator | 2025-06-03 15:26:56.663727 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-03 15:26:56.664461 | orchestrator | 2025-06-03 15:26:56.664963 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-03 15:26:56.665251 | orchestrator | Tuesday 03 June 2025 15:26:56 +0000 (0:00:00.279) 0:00:00.279 ********** 2025-06-03 15:26:56.864372 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-03 15:26:56.864482 | orchestrator | 2025-06-03 15:26:56.864500 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-03 15:26:56.866376 | orchestrator | Tuesday 03 June 2025 15:26:56 +0000 (0:00:00.203) 0:00:00.483 ********** 2025-06-03 15:26:57.067433 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:26:57.068453 | orchestrator | 2025-06-03 15:26:57.068538 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:57.068887 | orchestrator | Tuesday 03 June 2025 15:26:57 +0000 (0:00:00.206) 0:00:00.689 ********** 2025-06-03 15:26:57.371138 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-03 15:26:57.374116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-03 15:26:57.374958 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-3 => (item=loop2) 2025-06-03 15:26:57.375954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-03 15:26:57.376834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-03 15:26:57.377706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-03 15:26:57.378646 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-03 15:26:57.379417 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-03 15:26:57.381937 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-03 15:26:57.382702 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-03 15:26:57.383178 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-03 15:26:57.383984 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-03 15:26:57.384339 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-03 15:26:57.385156 | orchestrator | 2025-06-03 15:26:57.385394 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:57.385961 | orchestrator | Tuesday 03 June 2025 15:26:57 +0000 (0:00:00.297) 0:00:00.987 ********** 2025-06-03 15:26:57.807898 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:57.808496 | orchestrator | 2025-06-03 15:26:57.809639 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:57.811121 | orchestrator | Tuesday 03 June 2025 15:26:57 +0000 (0:00:00.440) 0:00:01.428 ********** 2025-06-03 15:26:58.001036 | orchestrator | skipping: [testbed-node-3] 
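The repeated "Add known links to the list of available block devices" tasks resolve the udev `/dev/disk/by-id` symlinks back to kernel device names, which is why entries such as `scsi-0QEMU_QEMU_HARDDISK_...` later appear as items alongside `sdb`/`sdc`. A minimal Python sketch of that resolution (the directory is the standard udev location; this is an illustration, not the playbook's actual implementation):

```python
import os

def device_links(by_id_dir="/dev/disk/by-id"):
    """Map kernel device names (e.g. 'sdb') to their by-id symlink names."""
    links = {}
    if os.path.isdir(by_id_dir):  # returns {} on hosts without the directory
        for name in sorted(os.listdir(by_id_dir)):
            target = os.path.realpath(os.path.join(by_id_dir, name))
            links.setdefault(os.path.basename(target), []).append(name)
    return links

print(device_links())
```

Each disk typically carries several aliases (e.g. both a `scsi-0...` and a `scsi-S...` name for the same QEMU disk), matching the paired items seen in the task output below.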
2025-06-03 15:26:58.002921 | orchestrator | 2025-06-03 15:26:58.005094 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:58.006484 | orchestrator | Tuesday 03 June 2025 15:26:57 +0000 (0:00:00.191) 0:00:01.620 ********** 2025-06-03 15:26:58.190881 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:58.191501 | orchestrator | 2025-06-03 15:26:58.191596 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:58.191674 | orchestrator | Tuesday 03 June 2025 15:26:58 +0000 (0:00:00.187) 0:00:01.808 ********** 2025-06-03 15:26:58.379294 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:58.380856 | orchestrator | 2025-06-03 15:26:58.381753 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:58.383798 | orchestrator | Tuesday 03 June 2025 15:26:58 +0000 (0:00:00.191) 0:00:01.999 ********** 2025-06-03 15:26:58.563244 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:58.563731 | orchestrator | 2025-06-03 15:26:58.564513 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:58.564893 | orchestrator | Tuesday 03 June 2025 15:26:58 +0000 (0:00:00.185) 0:00:02.184 ********** 2025-06-03 15:26:58.750338 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:58.750473 | orchestrator | 2025-06-03 15:26:58.753495 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:58.754121 | orchestrator | Tuesday 03 June 2025 15:26:58 +0000 (0:00:00.184) 0:00:02.369 ********** 2025-06-03 15:26:58.934879 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:58.935002 | orchestrator | 2025-06-03 15:26:58.935482 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:58.936097 | orchestrator | 
Tuesday 03 June 2025 15:26:58 +0000 (0:00:00.184) 0:00:02.554 ********** 2025-06-03 15:26:59.123526 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:26:59.123639 | orchestrator | 2025-06-03 15:26:59.123738 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:59.125045 | orchestrator | Tuesday 03 June 2025 15:26:59 +0000 (0:00:00.190) 0:00:02.745 ********** 2025-06-03 15:26:59.473330 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143) 2025-06-03 15:26:59.474002 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143) 2025-06-03 15:26:59.475893 | orchestrator | 2025-06-03 15:26:59.478344 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:59.478718 | orchestrator | Tuesday 03 June 2025 15:26:59 +0000 (0:00:00.349) 0:00:03.094 ********** 2025-06-03 15:26:59.849150 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5c901d52-eede-42c5-873c-7ade3ca032e1) 2025-06-03 15:26:59.849849 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5c901d52-eede-42c5-873c-7ade3ca032e1) 2025-06-03 15:26:59.850637 | orchestrator | 2025-06-03 15:26:59.851342 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:26:59.854333 | orchestrator | Tuesday 03 June 2025 15:26:59 +0000 (0:00:00.375) 0:00:03.470 ********** 2025-06-03 15:27:00.382264 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b4ac7e97-dff3-4114-bb9f-c387d4fd8c04) 2025-06-03 15:27:00.382668 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b4ac7e97-dff3-4114-bb9f-c387d4fd8c04) 2025-06-03 15:27:00.383024 | orchestrator | 2025-06-03 15:27:00.383590 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2025-06-03 15:27:00.384291 | orchestrator | Tuesday 03 June 2025 15:27:00 +0000 (0:00:00.531) 0:00:04.001 ********** 2025-06-03 15:27:00.956605 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_61b072b3-0d8d-4456-975d-55fef61370d3) 2025-06-03 15:27:00.956678 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_61b072b3-0d8d-4456-975d-55fef61370d3) 2025-06-03 15:27:00.957692 | orchestrator | 2025-06-03 15:27:00.958455 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:00.958972 | orchestrator | Tuesday 03 June 2025 15:27:00 +0000 (0:00:00.576) 0:00:04.578 ********** 2025-06-03 15:27:01.487327 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-03 15:27:01.487452 | orchestrator | 2025-06-03 15:27:01.487625 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:01.488887 | orchestrator | Tuesday 03 June 2025 15:27:01 +0000 (0:00:00.530) 0:00:05.108 ********** 2025-06-03 15:27:01.821724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-03 15:27:01.821817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-03 15:27:01.821975 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-03 15:27:01.822388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-03 15:27:01.822964 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-03 15:27:01.823484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-03 15:27:01.823764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-03 
15:27:01.824710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-03 15:27:01.827188 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-03 15:27:01.827468 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-03 15:27:01.827509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-03 15:27:01.827764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-03 15:27:01.828245 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-03 15:27:01.828573 | orchestrator | 2025-06-03 15:27:01.828939 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:01.829761 | orchestrator | Tuesday 03 June 2025 15:27:01 +0000 (0:00:00.335) 0:00:05.443 ********** 2025-06-03 15:27:01.982654 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:01.983670 | orchestrator | 2025-06-03 15:27:01.984861 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:01.985929 | orchestrator | Tuesday 03 June 2025 15:27:01 +0000 (0:00:00.160) 0:00:05.604 ********** 2025-06-03 15:27:02.154397 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:02.154758 | orchestrator | 2025-06-03 15:27:02.155152 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:02.155640 | orchestrator | Tuesday 03 June 2025 15:27:02 +0000 (0:00:00.168) 0:00:05.772 ********** 2025-06-03 15:27:02.323777 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:02.324156 | orchestrator | 2025-06-03 15:27:02.324355 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-06-03 15:27:02.325052 | orchestrator | Tuesday 03 June 2025 15:27:02 +0000 (0:00:00.172) 0:00:05.945 ********** 2025-06-03 15:27:02.495566 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:02.495615 | orchestrator | 2025-06-03 15:27:02.496834 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:02.498452 | orchestrator | Tuesday 03 June 2025 15:27:02 +0000 (0:00:00.171) 0:00:06.116 ********** 2025-06-03 15:27:02.665886 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:02.667609 | orchestrator | 2025-06-03 15:27:02.667634 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:02.667647 | orchestrator | Tuesday 03 June 2025 15:27:02 +0000 (0:00:00.170) 0:00:06.287 ********** 2025-06-03 15:27:02.834877 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:02.836048 | orchestrator | 2025-06-03 15:27:02.836684 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:02.837883 | orchestrator | Tuesday 03 June 2025 15:27:02 +0000 (0:00:00.169) 0:00:06.456 ********** 2025-06-03 15:27:02.991683 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:02.991774 | orchestrator | 2025-06-03 15:27:02.991880 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:02.993226 | orchestrator | Tuesday 03 June 2025 15:27:02 +0000 (0:00:00.156) 0:00:06.613 ********** 2025-06-03 15:27:03.215944 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:03.216026 | orchestrator | 2025-06-03 15:27:03.216102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:03.216249 | orchestrator | Tuesday 03 June 2025 15:27:03 +0000 (0:00:00.223) 0:00:06.837 ********** 2025-06-03 15:27:04.020792 | orchestrator | ok: [testbed-node-3] => (item=sda1) 
2025-06-03 15:27:04.021449 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-03 15:27:04.023319 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-03 15:27:04.023552 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-03 15:27:04.023797 | orchestrator | 2025-06-03 15:27:04.024455 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:04.025791 | orchestrator | Tuesday 03 June 2025 15:27:04 +0000 (0:00:00.802) 0:00:07.640 ********** 2025-06-03 15:27:04.199991 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:04.200693 | orchestrator | 2025-06-03 15:27:04.200747 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:04.201330 | orchestrator | Tuesday 03 June 2025 15:27:04 +0000 (0:00:00.180) 0:00:07.821 ********** 2025-06-03 15:27:04.393742 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:04.394359 | orchestrator | 2025-06-03 15:27:04.395225 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:04.395966 | orchestrator | Tuesday 03 June 2025 15:27:04 +0000 (0:00:00.193) 0:00:08.014 ********** 2025-06-03 15:27:04.592815 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:04.593631 | orchestrator | 2025-06-03 15:27:04.594295 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:04.595735 | orchestrator | Tuesday 03 June 2025 15:27:04 +0000 (0:00:00.196) 0:00:08.211 ********** 2025-06-03 15:27:04.787204 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:04.787578 | orchestrator | 2025-06-03 15:27:04.788815 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-03 15:27:04.789222 | orchestrator | Tuesday 03 June 2025 15:27:04 +0000 (0:00:00.195) 0:00:08.407 ********** 2025-06-03 15:27:04.982231 | orchestrator 
| ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-06-03 15:27:04.983134 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-06-03 15:27:04.984728 | orchestrator | 2025-06-03 15:27:04.985620 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-03 15:27:04.986096 | orchestrator | Tuesday 03 June 2025 15:27:04 +0000 (0:00:00.195) 0:00:08.603 ********** 2025-06-03 15:27:05.116059 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:05.116451 | orchestrator | 2025-06-03 15:27:05.118380 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-03 15:27:05.119270 | orchestrator | Tuesday 03 June 2025 15:27:05 +0000 (0:00:00.133) 0:00:08.736 ********** 2025-06-03 15:27:05.238074 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:05.238322 | orchestrator | 2025-06-03 15:27:05.238674 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-03 15:27:05.239207 | orchestrator | Tuesday 03 June 2025 15:27:05 +0000 (0:00:00.122) 0:00:08.859 ********** 2025-06-03 15:27:05.366873 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:05.367659 | orchestrator | 2025-06-03 15:27:05.368008 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-03 15:27:05.370319 | orchestrator | Tuesday 03 June 2025 15:27:05 +0000 (0:00:00.127) 0:00:08.986 ********** 2025-06-03 15:27:05.504401 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:27:05.506758 | orchestrator | 2025-06-03 15:27:05.507052 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-03 15:27:05.508100 | orchestrator | Tuesday 03 June 2025 15:27:05 +0000 (0:00:00.136) 0:00:09.123 ********** 2025-06-03 15:27:05.670779 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'5a262827-4eba-5d37-ab06-09e1d49a835c'}}) 2025-06-03 15:27:05.672021 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd47078ac-4564-569b-bfa7-6d988d420f95'}}) 2025-06-03 15:27:05.673163 | orchestrator | 2025-06-03 15:27:05.674066 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-03 15:27:05.674802 | orchestrator | Tuesday 03 June 2025 15:27:05 +0000 (0:00:00.167) 0:00:09.290 ********** 2025-06-03 15:27:05.805328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5a262827-4eba-5d37-ab06-09e1d49a835c'}})  2025-06-03 15:27:05.805750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd47078ac-4564-569b-bfa7-6d988d420f95'}})  2025-06-03 15:27:05.806802 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:05.808349 | orchestrator | 2025-06-03 15:27:05.809030 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-03 15:27:05.810428 | orchestrator | Tuesday 03 June 2025 15:27:05 +0000 (0:00:00.133) 0:00:09.424 ********** 2025-06-03 15:27:05.946253 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5a262827-4eba-5d37-ab06-09e1d49a835c'}})  2025-06-03 15:27:05.946352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd47078ac-4564-569b-bfa7-6d988d420f95'}})  2025-06-03 15:27:05.946914 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:05.947855 | orchestrator | 2025-06-03 15:27:05.950193 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-03 15:27:05.950236 | orchestrator | Tuesday 03 June 2025 15:27:05 +0000 (0:00:00.141) 0:00:09.566 ********** 2025-06-03 15:27:06.372829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'5a262827-4eba-5d37-ab06-09e1d49a835c'}})  2025-06-03 15:27:06.375181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd47078ac-4564-569b-bfa7-6d988d420f95'}})  2025-06-03 15:27:06.376203 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:06.377069 | orchestrator | 2025-06-03 15:27:06.377784 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-03 15:27:06.378695 | orchestrator | Tuesday 03 June 2025 15:27:06 +0000 (0:00:00.426) 0:00:09.992 ********** 2025-06-03 15:27:06.514931 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:27:06.515028 | orchestrator | 2025-06-03 15:27:06.515393 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-03 15:27:06.517459 | orchestrator | Tuesday 03 June 2025 15:27:06 +0000 (0:00:00.141) 0:00:10.134 ********** 2025-06-03 15:27:06.659640 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:27:06.659861 | orchestrator | 2025-06-03 15:27:06.660657 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-03 15:27:06.661401 | orchestrator | Tuesday 03 June 2025 15:27:06 +0000 (0:00:00.145) 0:00:10.279 ********** 2025-06-03 15:27:06.789877 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:06.791076 | orchestrator | 2025-06-03 15:27:06.791455 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-03 15:27:06.792767 | orchestrator | Tuesday 03 June 2025 15:27:06 +0000 (0:00:00.128) 0:00:10.408 ********** 2025-06-03 15:27:06.914619 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:06.914706 | orchestrator | 2025-06-03 15:27:06.914762 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-03 15:27:06.914770 | orchestrator | Tuesday 03 June 2025 15:27:06 +0000 (0:00:00.126) 0:00:10.535 ********** 2025-06-03 
15:27:07.056291 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:07.057356 | orchestrator | 2025-06-03 15:27:07.058122 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-03 15:27:07.059162 | orchestrator | Tuesday 03 June 2025 15:27:07 +0000 (0:00:00.141) 0:00:10.676 ********** 2025-06-03 15:27:07.196101 | orchestrator | ok: [testbed-node-3] => { 2025-06-03 15:27:07.196239 | orchestrator |  "ceph_osd_devices": { 2025-06-03 15:27:07.198911 | orchestrator |  "sdb": { 2025-06-03 15:27:07.199590 | orchestrator |  "osd_lvm_uuid": "5a262827-4eba-5d37-ab06-09e1d49a835c" 2025-06-03 15:27:07.200382 | orchestrator |  }, 2025-06-03 15:27:07.202953 | orchestrator |  "sdc": { 2025-06-03 15:27:07.203173 | orchestrator |  "osd_lvm_uuid": "d47078ac-4564-569b-bfa7-6d988d420f95" 2025-06-03 15:27:07.203647 | orchestrator |  } 2025-06-03 15:27:07.204532 | orchestrator |  } 2025-06-03 15:27:07.205785 | orchestrator | } 2025-06-03 15:27:07.205885 | orchestrator | 2025-06-03 15:27:07.205983 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-03 15:27:07.206358 | orchestrator | Tuesday 03 June 2025 15:27:07 +0000 (0:00:00.139) 0:00:10.816 ********** 2025-06-03 15:27:07.339384 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:07.339550 | orchestrator | 2025-06-03 15:27:07.339565 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-03 15:27:07.341369 | orchestrator | Tuesday 03 June 2025 15:27:07 +0000 (0:00:00.143) 0:00:10.959 ********** 2025-06-03 15:27:07.470530 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:07.470632 | orchestrator | 2025-06-03 15:27:07.470652 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-03 15:27:07.470774 | orchestrator | Tuesday 03 June 2025 15:27:07 +0000 (0:00:00.132) 0:00:11.091 ********** 2025-06-03 
15:27:07.613192 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:27:07.613656 | orchestrator | 2025-06-03 15:27:07.614631 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-03 15:27:07.615260 | orchestrator | Tuesday 03 June 2025 15:27:07 +0000 (0:00:00.141) 0:00:11.232 ********** 2025-06-03 15:27:07.837752 | orchestrator | changed: [testbed-node-3] => { 2025-06-03 15:27:07.838209 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-03 15:27:07.838241 | orchestrator |  "ceph_osd_devices": { 2025-06-03 15:27:07.838258 | orchestrator |  "sdb": { 2025-06-03 15:27:07.839918 | orchestrator |  "osd_lvm_uuid": "5a262827-4eba-5d37-ab06-09e1d49a835c" 2025-06-03 15:27:07.839947 | orchestrator |  }, 2025-06-03 15:27:07.839997 | orchestrator |  "sdc": { 2025-06-03 15:27:07.841051 | orchestrator |  "osd_lvm_uuid": "d47078ac-4564-569b-bfa7-6d988d420f95" 2025-06-03 15:27:07.841542 | orchestrator |  } 2025-06-03 15:27:07.842000 | orchestrator |  }, 2025-06-03 15:27:07.842538 | orchestrator |  "lvm_volumes": [ 2025-06-03 15:27:07.842942 | orchestrator |  { 2025-06-03 15:27:07.843709 | orchestrator |  "data": "osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c", 2025-06-03 15:27:07.843922 | orchestrator |  "data_vg": "ceph-5a262827-4eba-5d37-ab06-09e1d49a835c" 2025-06-03 15:27:07.844454 | orchestrator |  }, 2025-06-03 15:27:07.844923 | orchestrator |  { 2025-06-03 15:27:07.846559 | orchestrator |  "data": "osd-block-d47078ac-4564-569b-bfa7-6d988d420f95", 2025-06-03 15:27:07.846706 | orchestrator |  "data_vg": "ceph-d47078ac-4564-569b-bfa7-6d988d420f95" 2025-06-03 15:27:07.847231 | orchestrator |  } 2025-06-03 15:27:07.847577 | orchestrator |  ] 2025-06-03 15:27:07.850642 | orchestrator |  } 2025-06-03 15:27:07.851307 | orchestrator | } 2025-06-03 15:27:07.851860 | orchestrator | 2025-06-03 15:27:07.854322 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-03 
15:27:07.856655 | orchestrator | Tuesday 03 June 2025 15:27:07 +0000 (0:00:00.226) 0:00:11.458 ********** 2025-06-03 15:27:10.105532 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-03 15:27:10.106006 | orchestrator | 2025-06-03 15:27:10.108343 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-03 15:27:10.109469 | orchestrator | 2025-06-03 15:27:10.110550 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-03 15:27:10.110990 | orchestrator | Tuesday 03 June 2025 15:27:10 +0000 (0:00:02.261) 0:00:13.720 ********** 2025-06-03 15:27:10.328226 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-03 15:27:10.328767 | orchestrator | 2025-06-03 15:27:10.331130 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-03 15:27:10.332147 | orchestrator | Tuesday 03 June 2025 15:27:10 +0000 (0:00:00.226) 0:00:13.946 ********** 2025-06-03 15:27:10.558416 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:27:10.558471 | orchestrator | 2025-06-03 15:27:10.558477 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:10.558482 | orchestrator | Tuesday 03 June 2025 15:27:10 +0000 (0:00:00.231) 0:00:14.178 ********** 2025-06-03 15:27:10.916140 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-03 15:27:10.916391 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-03 15:27:10.918105 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-03 15:27:10.918578 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-03 15:27:10.919273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-4 => (item=loop4) 2025-06-03 15:27:10.919878 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-03 15:27:10.924611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-03 15:27:10.924818 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-03 15:27:10.925128 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-03 15:27:10.925462 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-03 15:27:10.926152 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-03 15:27:10.926852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-03 15:27:10.926867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-03 15:27:10.927151 | orchestrator | 2025-06-03 15:27:10.927456 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:10.929010 | orchestrator | Tuesday 03 June 2025 15:27:10 +0000 (0:00:00.358) 0:00:14.536 ********** 2025-06-03 15:27:11.115653 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:11.116221 | orchestrator | 2025-06-03 15:27:11.116601 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:11.116995 | orchestrator | Tuesday 03 June 2025 15:27:11 +0000 (0:00:00.199) 0:00:14.736 ********** 2025-06-03 15:27:11.312197 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:11.316172 | orchestrator | 2025-06-03 15:27:11.316262 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:11.316449 | orchestrator | Tuesday 03 June 2025 15:27:11 +0000 
(0:00:00.198) 0:00:14.934 ********** 2025-06-03 15:27:11.499554 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:11.499717 | orchestrator | 2025-06-03 15:27:11.499886 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:11.500120 | orchestrator | Tuesday 03 June 2025 15:27:11 +0000 (0:00:00.183) 0:00:15.117 ********** 2025-06-03 15:27:11.675339 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:11.675451 | orchestrator | 2025-06-03 15:27:11.675513 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:11.676715 | orchestrator | Tuesday 03 June 2025 15:27:11 +0000 (0:00:00.177) 0:00:15.295 ********** 2025-06-03 15:27:11.862333 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:11.862446 | orchestrator | 2025-06-03 15:27:11.862513 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:11.862525 | orchestrator | Tuesday 03 June 2025 15:27:11 +0000 (0:00:00.187) 0:00:15.483 ********** 2025-06-03 15:27:12.485447 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:12.486597 | orchestrator | 2025-06-03 15:27:12.486944 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:12.487523 | orchestrator | Tuesday 03 June 2025 15:27:12 +0000 (0:00:00.623) 0:00:16.107 ********** 2025-06-03 15:27:12.691024 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:12.691187 | orchestrator | 2025-06-03 15:27:12.691399 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:12.691788 | orchestrator | Tuesday 03 June 2025 15:27:12 +0000 (0:00:00.205) 0:00:16.312 ********** 2025-06-03 15:27:12.911483 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:12.914246 | orchestrator | 2025-06-03 15:27:12.914559 | orchestrator | TASK [Add known 
links to the list of available block devices] ****************** 2025-06-03 15:27:12.915107 | orchestrator | Tuesday 03 June 2025 15:27:12 +0000 (0:00:00.218) 0:00:16.531 ********** 2025-06-03 15:27:13.336553 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba) 2025-06-03 15:27:13.338321 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba) 2025-06-03 15:27:13.340705 | orchestrator | 2025-06-03 15:27:13.342218 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:13.343891 | orchestrator | Tuesday 03 June 2025 15:27:13 +0000 (0:00:00.425) 0:00:16.957 ********** 2025-06-03 15:27:13.740348 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_88cf38eb-fdbf-404b-9f1d-cd32f6bedf4b) 2025-06-03 15:27:13.741047 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_88cf38eb-fdbf-404b-9f1d-cd32f6bedf4b) 2025-06-03 15:27:13.742314 | orchestrator | 2025-06-03 15:27:13.744206 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:13.747905 | orchestrator | Tuesday 03 June 2025 15:27:13 +0000 (0:00:00.402) 0:00:17.360 ********** 2025-06-03 15:27:14.189057 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_35e8ec34-b9aa-4705-9105-50464be240ba) 2025-06-03 15:27:14.189154 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_35e8ec34-b9aa-4705-9105-50464be240ba) 2025-06-03 15:27:14.192113 | orchestrator | 2025-06-03 15:27:14.192830 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:14.197106 | orchestrator | Tuesday 03 June 2025 15:27:14 +0000 (0:00:00.448) 0:00:17.808 ********** 2025-06-03 15:27:14.651232 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b1c5376b-f7c7-4aac-a0b2-3df8be7d9631) 
2025-06-03 15:27:14.653169 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b1c5376b-f7c7-4aac-a0b2-3df8be7d9631) 2025-06-03 15:27:14.657199 | orchestrator | 2025-06-03 15:27:14.659630 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:14.660593 | orchestrator | Tuesday 03 June 2025 15:27:14 +0000 (0:00:00.462) 0:00:18.271 ********** 2025-06-03 15:27:14.994844 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-03 15:27:14.995681 | orchestrator | 2025-06-03 15:27:14.998221 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:14.998542 | orchestrator | Tuesday 03 June 2025 15:27:14 +0000 (0:00:00.341) 0:00:18.613 ********** 2025-06-03 15:27:15.384760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-03 15:27:15.384899 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-03 15:27:15.384989 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-03 15:27:15.385336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-03 15:27:15.385691 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-03 15:27:15.386117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-03 15:27:15.386673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-03 15:27:15.386982 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-03 15:27:15.387516 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-03 
15:27:15.388169 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-03 15:27:15.388594 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-03 15:27:15.389074 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-03 15:27:15.389508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-03 15:27:15.389957 | orchestrator | 2025-06-03 15:27:15.390659 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:15.390991 | orchestrator | Tuesday 03 June 2025 15:27:15 +0000 (0:00:00.387) 0:00:19.000 ********** 2025-06-03 15:27:15.609655 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:15.609866 | orchestrator | 2025-06-03 15:27:15.610346 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:15.610939 | orchestrator | Tuesday 03 June 2025 15:27:15 +0000 (0:00:00.230) 0:00:19.230 ********** 2025-06-03 15:27:16.295399 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:16.299311 | orchestrator | 2025-06-03 15:27:16.299348 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:16.299357 | orchestrator | Tuesday 03 June 2025 15:27:16 +0000 (0:00:00.681) 0:00:19.912 ********** 2025-06-03 15:27:16.515945 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:16.516495 | orchestrator | 2025-06-03 15:27:16.520343 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:16.520592 | orchestrator | Tuesday 03 June 2025 15:27:16 +0000 (0:00:00.224) 0:00:20.136 ********** 2025-06-03 15:27:16.713819 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:16.714809 | orchestrator | 2025-06-03 
15:27:16.716240 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:16.719434 | orchestrator | Tuesday 03 June 2025 15:27:16 +0000 (0:00:00.198) 0:00:20.335 ********** 2025-06-03 15:27:16.893032 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:16.894564 | orchestrator | 2025-06-03 15:27:16.896709 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:16.896725 | orchestrator | Tuesday 03 June 2025 15:27:16 +0000 (0:00:00.177) 0:00:20.513 ********** 2025-06-03 15:27:17.070621 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:17.072010 | orchestrator | 2025-06-03 15:27:17.073725 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:17.073739 | orchestrator | Tuesday 03 June 2025 15:27:17 +0000 (0:00:00.178) 0:00:20.691 ********** 2025-06-03 15:27:17.240182 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:17.241468 | orchestrator | 2025-06-03 15:27:17.242069 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:17.242302 | orchestrator | Tuesday 03 June 2025 15:27:17 +0000 (0:00:00.168) 0:00:20.859 ********** 2025-06-03 15:27:17.444817 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:17.446514 | orchestrator | 2025-06-03 15:27:17.447221 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:17.447894 | orchestrator | Tuesday 03 June 2025 15:27:17 +0000 (0:00:00.204) 0:00:21.064 ********** 2025-06-03 15:27:18.056794 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-03 15:27:18.057233 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-03 15:27:18.057969 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-03 15:27:18.058608 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-03 
15:27:18.059317 | orchestrator | 2025-06-03 15:27:18.059890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:18.060511 | orchestrator | Tuesday 03 June 2025 15:27:18 +0000 (0:00:00.612) 0:00:21.677 ********** 2025-06-03 15:27:18.270858 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:18.271553 | orchestrator | 2025-06-03 15:27:18.272519 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:18.273880 | orchestrator | Tuesday 03 June 2025 15:27:18 +0000 (0:00:00.213) 0:00:21.891 ********** 2025-06-03 15:27:18.463163 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:18.463464 | orchestrator | 2025-06-03 15:27:18.464943 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:18.465187 | orchestrator | Tuesday 03 June 2025 15:27:18 +0000 (0:00:00.193) 0:00:22.084 ********** 2025-06-03 15:27:18.645774 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:18.647371 | orchestrator | 2025-06-03 15:27:18.648753 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:27:18.649868 | orchestrator | Tuesday 03 June 2025 15:27:18 +0000 (0:00:00.182) 0:00:22.266 ********** 2025-06-03 15:27:18.853097 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:18.853229 | orchestrator | 2025-06-03 15:27:18.853379 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-03 15:27:18.853642 | orchestrator | Tuesday 03 June 2025 15:27:18 +0000 (0:00:00.207) 0:00:22.474 ********** 2025-06-03 15:27:19.174366 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-06-03 15:27:19.175371 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-06-03 15:27:19.177482 | orchestrator | 2025-06-03 15:27:19.177773 | orchestrator 
| TASK [Generate WAL VG names] *************************************************** 2025-06-03 15:27:19.178944 | orchestrator | Tuesday 03 June 2025 15:27:19 +0000 (0:00:00.320) 0:00:22.795 ********** 2025-06-03 15:27:19.291453 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:19.291574 | orchestrator | 2025-06-03 15:27:19.292119 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-03 15:27:19.292855 | orchestrator | Tuesday 03 June 2025 15:27:19 +0000 (0:00:00.115) 0:00:22.911 ********** 2025-06-03 15:27:19.413727 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:19.413902 | orchestrator | 2025-06-03 15:27:19.414674 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-03 15:27:19.415141 | orchestrator | Tuesday 03 June 2025 15:27:19 +0000 (0:00:00.123) 0:00:23.034 ********** 2025-06-03 15:27:19.554814 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:19.555169 | orchestrator | 2025-06-03 15:27:19.555458 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-03 15:27:19.556440 | orchestrator | Tuesday 03 June 2025 15:27:19 +0000 (0:00:00.141) 0:00:23.176 ********** 2025-06-03 15:27:19.685822 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:27:19.686204 | orchestrator | 2025-06-03 15:27:19.686400 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-03 15:27:19.688080 | orchestrator | Tuesday 03 June 2025 15:27:19 +0000 (0:00:00.130) 0:00:23.307 ********** 2025-06-03 15:27:19.829368 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f00e4ac9-9831-582f-92bc-f2b318630797'}}) 2025-06-03 15:27:19.829779 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2547461e-5dcb-5046-b3ed-0a182c83d3a8'}}) 2025-06-03 15:27:19.830144 | orchestrator | 2025-06-03 
15:27:19.831318 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-03 15:27:19.831930 | orchestrator | Tuesday 03 June 2025 15:27:19 +0000 (0:00:00.142) 0:00:23.450 ********** 2025-06-03 15:27:19.979609 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f00e4ac9-9831-582f-92bc-f2b318630797'}})  2025-06-03 15:27:19.980656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2547461e-5dcb-5046-b3ed-0a182c83d3a8'}})  2025-06-03 15:27:19.981937 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:19.983749 | orchestrator | 2025-06-03 15:27:19.984094 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-03 15:27:19.985123 | orchestrator | Tuesday 03 June 2025 15:27:19 +0000 (0:00:00.149) 0:00:23.600 ********** 2025-06-03 15:27:20.131229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f00e4ac9-9831-582f-92bc-f2b318630797'}})  2025-06-03 15:27:20.131344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2547461e-5dcb-5046-b3ed-0a182c83d3a8'}})  2025-06-03 15:27:20.132007 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:20.132691 | orchestrator | 2025-06-03 15:27:20.133889 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-03 15:27:20.134852 | orchestrator | Tuesday 03 June 2025 15:27:20 +0000 (0:00:00.151) 0:00:23.751 ********** 2025-06-03 15:27:20.283745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f00e4ac9-9831-582f-92bc-f2b318630797'}})  2025-06-03 15:27:20.289077 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2547461e-5dcb-5046-b3ed-0a182c83d3a8'}})  2025-06-03 15:27:20.289140 | orchestrator | skipping: [testbed-node-4] 
2025-06-03 15:27:20.289168 | orchestrator | 2025-06-03 15:27:20.290684 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-03 15:27:20.291443 | orchestrator | Tuesday 03 June 2025 15:27:20 +0000 (0:00:00.151) 0:00:23.903 ********** 2025-06-03 15:27:20.476446 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:27:20.477382 | orchestrator | 2025-06-03 15:27:20.478187 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-03 15:27:20.479238 | orchestrator | Tuesday 03 June 2025 15:27:20 +0000 (0:00:00.193) 0:00:24.096 ********** 2025-06-03 15:27:20.642680 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:27:20.645009 | orchestrator | 2025-06-03 15:27:20.645865 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-03 15:27:20.646826 | orchestrator | Tuesday 03 June 2025 15:27:20 +0000 (0:00:00.166) 0:00:24.263 ********** 2025-06-03 15:27:20.776509 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:20.777279 | orchestrator | 2025-06-03 15:27:20.777581 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-03 15:27:20.777868 | orchestrator | Tuesday 03 June 2025 15:27:20 +0000 (0:00:00.133) 0:00:24.396 ********** 2025-06-03 15:27:21.144385 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:21.145399 | orchestrator | 2025-06-03 15:27:21.147663 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-03 15:27:21.147822 | orchestrator | Tuesday 03 June 2025 15:27:21 +0000 (0:00:00.367) 0:00:24.764 ********** 2025-06-03 15:27:21.277804 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:21.278527 | orchestrator | 2025-06-03 15:27:21.279657 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-03 15:27:21.279689 | orchestrator | Tuesday 03 
June 2025 15:27:21 +0000 (0:00:00.134) 0:00:24.898 ********** 2025-06-03 15:27:21.413096 | orchestrator | ok: [testbed-node-4] => { 2025-06-03 15:27:21.413585 | orchestrator |  "ceph_osd_devices": { 2025-06-03 15:27:21.414272 | orchestrator |  "sdb": { 2025-06-03 15:27:21.415570 | orchestrator |  "osd_lvm_uuid": "f00e4ac9-9831-582f-92bc-f2b318630797" 2025-06-03 15:27:21.415806 | orchestrator |  }, 2025-06-03 15:27:21.416735 | orchestrator |  "sdc": { 2025-06-03 15:27:21.417589 | orchestrator |  "osd_lvm_uuid": "2547461e-5dcb-5046-b3ed-0a182c83d3a8" 2025-06-03 15:27:21.418175 | orchestrator |  } 2025-06-03 15:27:21.418801 | orchestrator |  } 2025-06-03 15:27:21.419200 | orchestrator | } 2025-06-03 15:27:21.419700 | orchestrator | 2025-06-03 15:27:21.420138 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-03 15:27:21.420786 | orchestrator | Tuesday 03 June 2025 15:27:21 +0000 (0:00:00.134) 0:00:25.033 ********** 2025-06-03 15:27:21.553124 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:21.554527 | orchestrator | 2025-06-03 15:27:21.555975 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-03 15:27:21.557148 | orchestrator | Tuesday 03 June 2025 15:27:21 +0000 (0:00:00.140) 0:00:25.173 ********** 2025-06-03 15:27:21.689360 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:21.689485 | orchestrator | 2025-06-03 15:27:21.691099 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-03 15:27:21.691160 | orchestrator | Tuesday 03 June 2025 15:27:21 +0000 (0:00:00.133) 0:00:25.307 ********** 2025-06-03 15:27:21.817966 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:27:21.818115 | orchestrator | 2025-06-03 15:27:21.819064 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-03 15:27:21.819097 | orchestrator | Tuesday 03 June 2025 
15:27:21 +0000 (0:00:00.132) 0:00:25.439 ********** 2025-06-03 15:27:22.019571 | orchestrator | changed: [testbed-node-4] => { 2025-06-03 15:27:22.020835 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-03 15:27:22.023023 | orchestrator |  "ceph_osd_devices": { 2025-06-03 15:27:22.023991 | orchestrator |  "sdb": { 2025-06-03 15:27:22.025952 | orchestrator |  "osd_lvm_uuid": "f00e4ac9-9831-582f-92bc-f2b318630797" 2025-06-03 15:27:22.027467 | orchestrator |  }, 2025-06-03 15:27:22.027955 | orchestrator |  "sdc": { 2025-06-03 15:27:22.028683 | orchestrator |  "osd_lvm_uuid": "2547461e-5dcb-5046-b3ed-0a182c83d3a8" 2025-06-03 15:27:22.029375 | orchestrator |  } 2025-06-03 15:27:22.030331 | orchestrator |  }, 2025-06-03 15:27:22.031145 | orchestrator |  "lvm_volumes": [ 2025-06-03 15:27:22.032066 | orchestrator |  { 2025-06-03 15:27:22.032413 | orchestrator |  "data": "osd-block-f00e4ac9-9831-582f-92bc-f2b318630797", 2025-06-03 15:27:22.033180 | orchestrator |  "data_vg": "ceph-f00e4ac9-9831-582f-92bc-f2b318630797" 2025-06-03 15:27:22.033482 | orchestrator |  }, 2025-06-03 15:27:22.033910 | orchestrator |  { 2025-06-03 15:27:22.034556 | orchestrator |  "data": "osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8", 2025-06-03 15:27:22.035016 | orchestrator |  "data_vg": "ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8" 2025-06-03 15:27:22.036906 | orchestrator |  } 2025-06-03 15:27:22.036929 | orchestrator |  ] 2025-06-03 15:27:22.036941 | orchestrator |  } 2025-06-03 15:27:22.036953 | orchestrator | } 2025-06-03 15:27:22.037202 | orchestrator | 2025-06-03 15:27:22.037636 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-03 15:27:22.037904 | orchestrator | Tuesday 03 June 2025 15:27:22 +0000 (0:00:00.200) 0:00:25.640 ********** 2025-06-03 15:27:23.142880 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-03 15:27:23.143259 | orchestrator | 2025-06-03 15:27:23.144033 | orchestrator 
| PLAY [Ceph configure LVM] ****************************************************** 2025-06-03 15:27:23.144410 | orchestrator | 2025-06-03 15:27:23.145344 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-03 15:27:23.145817 | orchestrator | Tuesday 03 June 2025 15:27:23 +0000 (0:00:01.122) 0:00:26.762 ********** 2025-06-03 15:27:23.616185 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-03 15:27:23.616599 | orchestrator | 2025-06-03 15:27:23.618365 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-03 15:27:23.619372 | orchestrator | Tuesday 03 June 2025 15:27:23 +0000 (0:00:00.472) 0:00:27.235 ********** 2025-06-03 15:27:24.408619 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:27:24.408885 | orchestrator | 2025-06-03 15:27:24.411158 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:24.411193 | orchestrator | Tuesday 03 June 2025 15:27:24 +0000 (0:00:00.792) 0:00:28.027 ********** 2025-06-03 15:27:24.805210 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-03 15:27:24.807174 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-03 15:27:24.807278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-03 15:27:24.808506 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-03 15:27:24.809988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-03 15:27:24.810602 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-06-03 15:27:24.811823 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-03 
15:27:24.812465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-03 15:27:24.813150 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-03 15:27:24.814388 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-03 15:27:24.815005 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-03 15:27:24.815515 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-03 15:27:24.816821 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-03 15:27:24.817450 | orchestrator | 2025-06-03 15:27:24.817803 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:24.818533 | orchestrator | Tuesday 03 June 2025 15:27:24 +0000 (0:00:00.396) 0:00:28.423 ********** 2025-06-03 15:27:25.014386 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:25.014569 | orchestrator | 2025-06-03 15:27:25.014742 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:25.015480 | orchestrator | Tuesday 03 June 2025 15:27:25 +0000 (0:00:00.210) 0:00:28.634 ********** 2025-06-03 15:27:25.248985 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:25.252034 | orchestrator | 2025-06-03 15:27:25.252118 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:25.252135 | orchestrator | Tuesday 03 June 2025 15:27:25 +0000 (0:00:00.235) 0:00:28.869 ********** 2025-06-03 15:27:25.467077 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:25.467392 | orchestrator | 2025-06-03 15:27:25.468285 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 
15:27:25.469121 | orchestrator | Tuesday 03 June 2025 15:27:25 +0000 (0:00:00.217) 0:00:29.086 ********** 2025-06-03 15:27:25.672289 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:25.672408 | orchestrator | 2025-06-03 15:27:25.672893 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:25.673232 | orchestrator | Tuesday 03 June 2025 15:27:25 +0000 (0:00:00.205) 0:00:29.292 ********** 2025-06-03 15:27:25.861136 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:25.862487 | orchestrator | 2025-06-03 15:27:25.863479 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:25.864316 | orchestrator | Tuesday 03 June 2025 15:27:25 +0000 (0:00:00.189) 0:00:29.481 ********** 2025-06-03 15:27:26.035874 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:26.036095 | orchestrator | 2025-06-03 15:27:26.036117 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:26.037060 | orchestrator | Tuesday 03 June 2025 15:27:26 +0000 (0:00:00.174) 0:00:29.656 ********** 2025-06-03 15:27:26.256319 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:26.257199 | orchestrator | 2025-06-03 15:27:26.257300 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:26.257697 | orchestrator | Tuesday 03 June 2025 15:27:26 +0000 (0:00:00.220) 0:00:29.876 ********** 2025-06-03 15:27:26.464647 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:27:26.464725 | orchestrator | 2025-06-03 15:27:26.465282 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:27:26.467415 | orchestrator | Tuesday 03 June 2025 15:27:26 +0000 (0:00:00.207) 0:00:30.084 ********** 2025-06-03 15:27:27.072499 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa)
2025-06-03 15:27:27.072818 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa)
2025-06-03 15:27:27.073854 | orchestrator |
2025-06-03 15:27:27.074241 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:27:27.075048 | orchestrator | Tuesday 03 June 2025 15:27:27 +0000 (0:00:00.608) 0:00:30.692 **********
2025-06-03 15:27:27.870839 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fa411336-a154-4770-b6c1-ce8fec2c95f2)
2025-06-03 15:27:27.871052 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fa411336-a154-4770-b6c1-ce8fec2c95f2)
2025-06-03 15:27:27.871808 | orchestrator |
2025-06-03 15:27:27.872399 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:27:27.872909 | orchestrator | Tuesday 03 June 2025 15:27:27 +0000 (0:00:00.799) 0:00:31.492 **********
2025-06-03 15:27:28.299129 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ffe2a0ca-5a38-47a9-803d-00b473435346)
2025-06-03 15:27:28.300254 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ffe2a0ca-5a38-47a9-803d-00b473435346)
2025-06-03 15:27:28.300782 | orchestrator |
2025-06-03 15:27:28.302967 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:27:28.303003 | orchestrator | Tuesday 03 June 2025 15:27:28 +0000 (0:00:00.426) 0:00:31.918 **********
2025-06-03 15:27:28.710794 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ed092372-9559-4d48-8a48-c44bdb9ee908)
2025-06-03 15:27:28.712460 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ed092372-9559-4d48-8a48-c44bdb9ee908)
2025-06-03 15:27:28.713659 | orchestrator |
2025-06-03 15:27:28.714960 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:27:28.715416 | orchestrator | Tuesday 03 June 2025 15:27:28 +0000 (0:00:00.410) 0:00:32.329 **********
2025-06-03 15:27:29.022184 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-03 15:27:29.023368 | orchestrator |
2025-06-03 15:27:29.024219 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:27:29.025174 | orchestrator | Tuesday 03 June 2025 15:27:29 +0000 (0:00:00.311) 0:00:32.641 **********
2025-06-03 15:27:29.404318 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-06-03 15:27:29.404419 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-06-03 15:27:29.404456 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-06-03 15:27:29.404461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-06-03 15:27:29.405949 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-06-03 15:27:29.407138 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-06-03 15:27:29.408114 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-06-03 15:27:29.410082 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-06-03 15:27:29.410100 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-06-03 15:27:29.410411 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-06-03 15:27:29.411188 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-06-03 15:27:29.411923 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-06-03 15:27:29.413002 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-06-03 15:27:29.413738 | orchestrator |
2025-06-03 15:27:29.414769 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:27:29.415447 | orchestrator | Tuesday 03 June 2025 15:27:29 +0000 (0:00:00.382) 0:00:33.024 **********
2025-06-03 15:27:29.612702 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:29.612799 | orchestrator |
2025-06-03 15:27:29.613638 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:27:29.614960 | orchestrator | Tuesday 03 June 2025 15:27:29 +0000 (0:00:00.209) 0:00:33.233 **********
2025-06-03 15:27:29.819597 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:29.819903 | orchestrator |
2025-06-03 15:27:29.821134 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:27:29.821848 | orchestrator | Tuesday 03 June 2025 15:27:29 +0000 (0:00:00.205) 0:00:33.438 **********
2025-06-03 15:27:30.026955 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:30.027951 | orchestrator |
2025-06-03 15:27:30.029543 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:27:30.030259 | orchestrator | Tuesday 03 June 2025 15:27:30 +0000 (0:00:00.204) 0:00:33.643 **********
2025-06-03 15:27:30.225398 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:30.225977 | orchestrator |
2025-06-03 15:27:30.227041 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:27:30.227782 | orchestrator | Tuesday 03 June 2025 15:27:30 +0000 (0:00:00.202) 0:00:33.846 **********
2025-06-03 15:27:30.442692 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:30.442858 | orchestrator |
2025-06-03 15:27:30.443088 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:27:30.443347 | orchestrator | Tuesday 03 June 2025 15:27:30 +0000 (0:00:00.217) 0:00:34.063 **********
2025-06-03 15:27:31.074817 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:31.075227 | orchestrator |
2025-06-03 15:27:31.076229 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:27:31.076557 | orchestrator | Tuesday 03 June 2025 15:27:31 +0000 (0:00:00.632) 0:00:34.695 **********
2025-06-03 15:27:31.286337 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:31.287201 | orchestrator |
2025-06-03 15:27:31.287943 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:27:31.290204 | orchestrator | Tuesday 03 June 2025 15:27:31 +0000 (0:00:00.211) 0:00:34.907 **********
2025-06-03 15:27:31.487574 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:31.487666 | orchestrator |
2025-06-03 15:27:31.488610 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:27:31.489508 | orchestrator | Tuesday 03 June 2025 15:27:31 +0000 (0:00:00.200) 0:00:35.107 **********
2025-06-03 15:27:32.115400 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-06-03 15:27:32.117980 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-06-03 15:27:32.119196 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-06-03 15:27:32.119796 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-06-03 15:27:32.120252 | orchestrator |
2025-06-03 15:27:32.120845 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:27:32.121182 | orchestrator | Tuesday 03 June 2025 15:27:32 +0000 (0:00:00.624) 0:00:35.732 **********
2025-06-03 15:27:32.314887 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:32.315091 | orchestrator |
2025-06-03 15:27:32.315936 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:27:32.316781 | orchestrator | Tuesday 03 June 2025 15:27:32 +0000 (0:00:00.203) 0:00:35.935 **********
2025-06-03 15:27:32.511113 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:32.511253 | orchestrator |
2025-06-03 15:27:32.511334 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:27:32.511940 | orchestrator | Tuesday 03 June 2025 15:27:32 +0000 (0:00:00.195) 0:00:36.131 **********
2025-06-03 15:27:32.697184 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:32.699719 | orchestrator |
2025-06-03 15:27:32.700869 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:27:32.701126 | orchestrator | Tuesday 03 June 2025 15:27:32 +0000 (0:00:00.186) 0:00:36.317 **********
2025-06-03 15:27:32.904690 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:32.904768 | orchestrator |
2025-06-03 15:27:32.905194 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-03 15:27:32.906121 | orchestrator | Tuesday 03 June 2025 15:27:32 +0000 (0:00:00.206) 0:00:36.524 **********
2025-06-03 15:27:33.104720 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-06-03 15:27:33.105689 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-06-03 15:27:33.106498 | orchestrator |
2025-06-03 15:27:33.107412 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-03 15:27:33.108079 | orchestrator | Tuesday 03 June 2025 15:27:33 +0000 (0:00:00.200) 0:00:36.725 **********
2025-06-03 15:27:33.255257 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:33.255749 | orchestrator |
2025-06-03 15:27:33.256928 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-03 15:27:33.257452 | orchestrator | Tuesday 03 June 2025 15:27:33 +0000 (0:00:00.149) 0:00:36.874 **********
2025-06-03 15:27:33.384983 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:33.386208 | orchestrator |
2025-06-03 15:27:33.387158 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-03 15:27:33.388175 | orchestrator | Tuesday 03 June 2025 15:27:33 +0000 (0:00:00.130) 0:00:37.005 **********
2025-06-03 15:27:33.524716 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:33.526387 | orchestrator |
2025-06-03 15:27:33.527048 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-03 15:27:33.528258 | orchestrator | Tuesday 03 June 2025 15:27:33 +0000 (0:00:00.139) 0:00:37.144 **********
2025-06-03 15:27:33.866902 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:27:33.866990 | orchestrator |
2025-06-03 15:27:33.867390 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-03 15:27:33.867863 | orchestrator | Tuesday 03 June 2025 15:27:33 +0000 (0:00:00.343) 0:00:37.487 **********
2025-06-03 15:27:34.039259 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '610c71bb-335d-5813-8d53-12327c30775e'}})
2025-06-03 15:27:34.039594 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ae8860ce-b651-5449-9c0b-e6c018225b94'}})
2025-06-03 15:27:34.039654 | orchestrator |
2025-06-03 15:27:34.040342 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-03 15:27:34.040704 | orchestrator | Tuesday 03 June 2025 15:27:34 +0000 (0:00:00.171) 0:00:37.659 **********
2025-06-03 15:27:34.204756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '610c71bb-335d-5813-8d53-12327c30775e'}})
2025-06-03 15:27:34.205465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ae8860ce-b651-5449-9c0b-e6c018225b94'}})
2025-06-03 15:27:34.205925 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:34.207146 | orchestrator |
2025-06-03 15:27:34.207932 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-03 15:27:34.208715 | orchestrator | Tuesday 03 June 2025 15:27:34 +0000 (0:00:00.165) 0:00:37.825 **********
2025-06-03 15:27:34.357294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '610c71bb-335d-5813-8d53-12327c30775e'}})
2025-06-03 15:27:34.357607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ae8860ce-b651-5449-9c0b-e6c018225b94'}})
2025-06-03 15:27:34.359052 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:34.360108 | orchestrator |
2025-06-03 15:27:34.361046 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-03 15:27:34.361278 | orchestrator | Tuesday 03 June 2025 15:27:34 +0000 (0:00:00.151) 0:00:37.977 **********
2025-06-03 15:27:34.512713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '610c71bb-335d-5813-8d53-12327c30775e'}})
2025-06-03 15:27:34.514778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ae8860ce-b651-5449-9c0b-e6c018225b94'}})
2025-06-03 15:27:34.515706 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:34.515959 | orchestrator |
2025-06-03 15:27:34.517456 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-03 15:27:34.517506 | orchestrator | Tuesday 03 June 2025 15:27:34 +0000 (0:00:00.155) 0:00:38.132 **********
2025-06-03 15:27:34.642924 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:27:34.644049 | orchestrator |
2025-06-03 15:27:34.644081 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-03 15:27:34.644096 | orchestrator | Tuesday 03 June 2025 15:27:34 +0000 (0:00:00.131) 0:00:38.264 **********
2025-06-03 15:27:34.794136 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:27:34.794240 | orchestrator |
2025-06-03 15:27:34.794821 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-03 15:27:34.795416 | orchestrator | Tuesday 03 June 2025 15:27:34 +0000 (0:00:00.149) 0:00:38.414 **********
2025-06-03 15:27:34.928145 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:34.928355 | orchestrator |
2025-06-03 15:27:34.929217 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-03 15:27:34.929719 | orchestrator | Tuesday 03 June 2025 15:27:34 +0000 (0:00:00.134) 0:00:38.548 **********
2025-06-03 15:27:35.052831 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:35.052938 | orchestrator |
2025-06-03 15:27:35.053370 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-03 15:27:35.053940 | orchestrator | Tuesday 03 June 2025 15:27:35 +0000 (0:00:00.125) 0:00:38.674 **********
2025-06-03 15:27:35.191885 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:35.191957 | orchestrator |
2025-06-03 15:27:35.194224 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-03 15:27:35.197130 | orchestrator | Tuesday 03 June 2025 15:27:35 +0000 (0:00:00.137) 0:00:38.811 **********
2025-06-03 15:27:35.337660 | orchestrator | ok: [testbed-node-5] => {
2025-06-03 15:27:35.337868 | orchestrator |     "ceph_osd_devices": {
2025-06-03 15:27:35.338601 | orchestrator |         "sdb": {
2025-06-03 15:27:35.338925 | orchestrator |             "osd_lvm_uuid": "610c71bb-335d-5813-8d53-12327c30775e"
2025-06-03 15:27:35.339489 | orchestrator |         },
2025-06-03 15:27:35.340707 | orchestrator |         "sdc": {
2025-06-03 15:27:35.340989 | orchestrator |             "osd_lvm_uuid": "ae8860ce-b651-5449-9c0b-e6c018225b94"
2025-06-03 15:27:35.341470 | orchestrator |         }
2025-06-03 15:27:35.342552 | orchestrator |     }
2025-06-03 15:27:35.342921 | orchestrator | }
2025-06-03 15:27:35.343451 | orchestrator |
2025-06-03 15:27:35.344178 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-03 15:27:35.346264 | orchestrator | Tuesday 03 June 2025 15:27:35 +0000 (0:00:00.147) 0:00:38.958 **********
2025-06-03 15:27:35.474306 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:35.474922 | orchestrator |
2025-06-03 15:27:35.475562 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-03 15:27:35.476579 | orchestrator | Tuesday 03 June 2025 15:27:35 +0000 (0:00:00.136) 0:00:39.095 **********
2025-06-03 15:27:35.821870 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:35.822173 | orchestrator |
2025-06-03 15:27:35.823132 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-03 15:27:35.823997 | orchestrator | Tuesday 03 June 2025 15:27:35 +0000 (0:00:00.346) 0:00:39.441 **********
2025-06-03 15:27:35.959765 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:27:35.960492 | orchestrator |
2025-06-03 15:27:35.961385 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-03 15:27:35.962137 | orchestrator | Tuesday 03 June 2025 15:27:35 +0000 (0:00:00.139) 0:00:39.581 **********
2025-06-03 15:27:36.168575 | orchestrator | changed: [testbed-node-5] => {
2025-06-03 15:27:36.169494 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-06-03 15:27:36.171063 | orchestrator |         "ceph_osd_devices": {
2025-06-03 15:27:36.172237 | orchestrator |             "sdb": {
2025-06-03 15:27:36.173608 | orchestrator |                 "osd_lvm_uuid": "610c71bb-335d-5813-8d53-12327c30775e"
2025-06-03 15:27:36.174782 | orchestrator |             },
2025-06-03 15:27:36.175654 | orchestrator |             "sdc": {
2025-06-03 15:27:36.176583 | orchestrator |                 "osd_lvm_uuid": "ae8860ce-b651-5449-9c0b-e6c018225b94"
2025-06-03 15:27:36.177527 | orchestrator |             }
2025-06-03 15:27:36.178331 | orchestrator |         },
2025-06-03 15:27:36.179112 | orchestrator |         "lvm_volumes": [
2025-06-03 15:27:36.179822 | orchestrator |             {
2025-06-03 15:27:36.180504 | orchestrator |                 "data": "osd-block-610c71bb-335d-5813-8d53-12327c30775e",
2025-06-03 15:27:36.181254 | orchestrator |                 "data_vg": "ceph-610c71bb-335d-5813-8d53-12327c30775e"
2025-06-03 15:27:36.182123 | orchestrator |             },
2025-06-03 15:27:36.182578 | orchestrator |             {
2025-06-03 15:27:36.183070 | orchestrator |                 "data": "osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94",
2025-06-03 15:27:36.183659 | orchestrator |                 "data_vg": "ceph-ae8860ce-b651-5449-9c0b-e6c018225b94"
2025-06-03 15:27:36.184062 | orchestrator |             }
2025-06-03 15:27:36.184645 | orchestrator |         ]
2025-06-03 15:27:36.185214 | orchestrator |     }
2025-06-03 15:27:36.185885 | orchestrator | }
2025-06-03 15:27:36.186371 | orchestrator |
2025-06-03 15:27:36.186753 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-03 15:27:36.187229 | orchestrator | Tuesday 03 June 2025 15:27:36 +0000 (0:00:00.208) 0:00:39.789 **********
2025-06-03 15:27:37.117215 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-03 15:27:37.117422 | orchestrator |
2025-06-03 15:27:37.117973 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:27:37.118837 | orchestrator | 2025-06-03 15:27:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-03 15:27:37.119163 | orchestrator | 2025-06-03 15:27:37 | INFO  | Please wait and do not abort execution.
2025-06-03 15:27:37.120125 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-03 15:27:37.121477 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-03 15:27:37.122372 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-03 15:27:37.122964 | orchestrator |
2025-06-03 15:27:37.123652 | orchestrator |
2025-06-03 15:27:37.124171 | orchestrator |
2025-06-03 15:27:37.125488 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:27:37.125918 | orchestrator | Tuesday 03 June 2025 15:27:37 +0000 (0:00:00.946) 0:00:40.735 **********
2025-06-03 15:27:37.126336 | orchestrator | ===============================================================================
2025-06-03 15:27:37.127165 | orchestrator | Write configuration file ------------------------------------------------ 4.33s
2025-06-03 15:27:37.127290 | orchestrator | Get initial list of available block devices ----------------------------- 1.23s
2025-06-03 15:27:37.127915 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s
2025-06-03 15:27:37.128410 | orchestrator | Add known links to the list of available block devices ------------------ 1.05s
2025-06-03 15:27:37.129021 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.90s
2025-06-03 15:27:37.129496 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s
2025-06-03 15:27:37.129931 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2025-06-03 15:27:37.130634 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.73s
2025-06-03 15:27:37.131758 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.72s
2025-06-03 15:27:37.132173 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2025-06-03 15:27:37.132627 | orchestrator | Print configuration data ------------------------------------------------ 0.63s
2025-06-03 15:27:37.133057 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2025-06-03 15:27:37.133689 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s
2025-06-03 15:27:37.134252 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2025-06-03 15:27:37.135078 | orchestrator | Set WAL devices config data --------------------------------------------- 0.62s
2025-06-03 15:27:37.135721 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s
2025-06-03 15:27:37.135914 | orchestrator | Print DB devices -------------------------------------------------------- 0.61s
2025-06-03 15:27:37.136572 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.61s
2025-06-03 15:27:37.136935 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2025-06-03 15:27:37.137540 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s
2025-06-03 15:27:49.577782 | orchestrator | Registering Redlock._acquired_script
2025-06-03 15:27:49.577885 | orchestrator | Registering Redlock._extend_script
2025-06-03 15:27:49.577899 | orchestrator | Registering Redlock._release_script
2025-06-03 15:27:49.639164 | orchestrator | 2025-06-03 15:27:49 | INFO  | Task c6744498-3f2e-4636-8edf-d00cbe270b1e (sync inventory) is running in background. Output coming soon.
2025-06-03 15:28:08.115405 | orchestrator | 2025-06-03 15:27:50 | INFO  | Starting group_vars file reorganization
2025-06-03 15:28:08.115585 | orchestrator | 2025-06-03 15:27:50 | INFO  | Moved 0 file(s) to their respective directories
2025-06-03 15:28:08.115603 | orchestrator | 2025-06-03 15:27:50 | INFO  | Group_vars file reorganization completed
2025-06-03 15:28:08.115614 | orchestrator | 2025-06-03 15:27:52 | INFO  | Starting variable preparation from inventory
2025-06-03 15:28:08.115626 | orchestrator | 2025-06-03 15:27:54 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-06-03 15:28:08.115638 | orchestrator | 2025-06-03 15:27:54 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-06-03 15:28:08.115674 | orchestrator | 2025-06-03 15:27:54 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-06-03 15:28:08.115686 | orchestrator | 2025-06-03 15:27:54 | INFO  | 3 file(s) written, 6 host(s) processed
2025-06-03 15:28:08.115698 | orchestrator | 2025-06-03 15:27:54 | INFO  | Variable preparation completed:
2025-06-03 15:28:08.115709 | orchestrator | 2025-06-03 15:27:55 | INFO  | Starting inventory overwrite handling
2025-06-03 15:28:08.115720 | orchestrator | 2025-06-03 15:27:55 | INFO  | Handling group overwrites in 99-overwrite
2025-06-03 15:28:08.115731 | orchestrator | 2025-06-03 15:27:55 | INFO  | Removing group frr:children from 60-generic
2025-06-03 15:28:08.115742 | orchestrator | 2025-06-03 15:27:55 | INFO  | Removing group storage:children from 50-kolla
2025-06-03 15:28:08.115752 | orchestrator | 2025-06-03 15:27:55 | INFO  | Removing group netbird:children from 50-infrastruture
2025-06-03 15:28:08.115773 | orchestrator | 2025-06-03 15:27:55 | INFO  | Removing group ceph-rgw from 50-ceph
2025-06-03 15:28:08.115785 | orchestrator | 2025-06-03 15:27:55 | INFO  | Removing group ceph-mds from 50-ceph
2025-06-03 15:28:08.115796 | orchestrator | 2025-06-03 15:27:55 | INFO  | Handling group overwrites in 20-roles
2025-06-03 15:28:08.115807 | orchestrator | 2025-06-03 15:27:55 | INFO  | Removing group k3s_node from 50-infrastruture
2025-06-03 15:28:08.115818 | orchestrator | 2025-06-03 15:27:55 | INFO  | Removed 6 group(s) in total
2025-06-03 15:28:08.115829 | orchestrator | 2025-06-03 15:27:55 | INFO  | Inventory overwrite handling completed
2025-06-03 15:28:08.115840 | orchestrator | 2025-06-03 15:27:56 | INFO  | Starting merge of inventory files
2025-06-03 15:28:08.115851 | orchestrator | 2025-06-03 15:27:56 | INFO  | Inventory files merged successfully
2025-06-03 15:28:08.115861 | orchestrator | 2025-06-03 15:28:00 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-06-03 15:28:08.115872 | orchestrator | 2025-06-03 15:28:06 | INFO  | Successfully wrote ClusterShell configuration
2025-06-03 15:28:08.115884 | orchestrator | [master e16f5e0] 2025-06-03-15-28
2025-06-03 15:28:08.115896 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-06-03 15:28:10.042171 | orchestrator | 2025-06-03 15:28:10 | INFO  | Task e03a91ab-709c-4f32-b0a9-ba144093f642 (ceph-create-lvm-devices) was prepared for execution.
2025-06-03 15:28:10.042270 | orchestrator | 2025-06-03 15:28:10 | INFO  | It takes a moment until task e03a91ab-709c-4f32-b0a9-ba144093f642 (ceph-create-lvm-devices) has been started and output is visible here.
2025-06-03 15:28:14.105361 | orchestrator |
2025-06-03 15:28:14.105639 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-03 15:28:14.106545 | orchestrator |
2025-06-03 15:28:14.107047 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-03 15:28:14.108223 | orchestrator | Tuesday 03 June 2025 15:28:14 +0000 (0:00:00.298) 0:00:00.298 **********
2025-06-03 15:28:14.411797 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-03 15:28:14.412794 | orchestrator |
2025-06-03 15:28:14.413614 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-03 15:28:14.414512 | orchestrator | Tuesday 03 June 2025 15:28:14 +0000 (0:00:00.307) 0:00:00.605 **********
2025-06-03 15:28:14.629290 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:28:14.629889 | orchestrator |
2025-06-03 15:28:14.630595 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:28:14.631474 | orchestrator | Tuesday 03 June 2025 15:28:14 +0000 (0:00:00.218) 0:00:00.824 **********
2025-06-03 15:28:15.021305 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-06-03 15:28:15.023288 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-06-03 15:28:15.023378 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-06-03 15:28:15.023402 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-06-03 15:28:15.023533 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-06-03 15:28:15.023900 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-06-03 15:28:15.024651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-06-03 15:28:15.025070 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-06-03 15:28:15.025551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-06-03 15:28:15.026424 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-06-03 15:28:15.027020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-06-03 15:28:15.027228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-06-03 15:28:15.027400 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-06-03 15:28:15.028788 | orchestrator |
2025-06-03 15:28:15.030186 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:28:15.030333 | orchestrator | Tuesday 03 June 2025 15:28:15 +0000 (0:00:00.390) 0:00:01.214 **********
2025-06-03 15:28:15.489262 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:15.489355 | orchestrator |
2025-06-03 15:28:15.489858 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:28:15.490852 | orchestrator | Tuesday 03 June 2025 15:28:15 +0000 (0:00:00.468) 0:00:01.683 **********
2025-06-03 15:28:15.684560 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:15.685370 | orchestrator |
2025-06-03 15:28:15.686167 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:28:15.689020 | orchestrator | Tuesday 03 June 2025 15:28:15 +0000 (0:00:00.195) 0:00:01.878 **********
2025-06-03 15:28:15.872021 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:15.872228 | orchestrator |
2025-06-03 15:28:15.873658 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:28:15.874528 | orchestrator | Tuesday 03 June 2025 15:28:15 +0000 (0:00:00.187) 0:00:02.065 **********
2025-06-03 15:28:16.063707 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:16.063887 | orchestrator |
2025-06-03 15:28:16.064788 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:28:16.065425 | orchestrator | Tuesday 03 June 2025 15:28:16 +0000 (0:00:00.192) 0:00:02.258 **********
2025-06-03 15:28:16.264417 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:16.264718 | orchestrator |
2025-06-03 15:28:16.266109 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:28:16.266809 | orchestrator | Tuesday 03 June 2025 15:28:16 +0000 (0:00:00.199) 0:00:02.458 **********
2025-06-03 15:28:16.453920 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:16.454651 | orchestrator |
2025-06-03 15:28:16.454989 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:28:16.456177 | orchestrator | Tuesday 03 June 2025 15:28:16 +0000 (0:00:00.190) 0:00:02.649 **********
2025-06-03 15:28:16.638079 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:16.638184 | orchestrator |
2025-06-03 15:28:16.638325 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:28:16.638733 | orchestrator | Tuesday 03 June 2025 15:28:16 +0000 (0:00:00.184) 0:00:02.833 **********
2025-06-03 15:28:16.835136 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:16.836539 | orchestrator |
2025-06-03 15:28:16.837144 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:28:16.838234 | orchestrator | Tuesday 03 June 2025 15:28:16 +0000 (0:00:00.196) 0:00:03.030 **********
2025-06-03 15:28:17.219343 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143)
2025-06-03 15:28:17.220128 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143)
2025-06-03 15:28:17.220571 | orchestrator |
2025-06-03 15:28:17.221561 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:28:17.222567 | orchestrator | Tuesday 03 June 2025 15:28:17 +0000 (0:00:00.384) 0:00:03.414 **********
2025-06-03 15:28:17.592922 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5c901d52-eede-42c5-873c-7ade3ca032e1)
2025-06-03 15:28:17.593282 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5c901d52-eede-42c5-873c-7ade3ca032e1)
2025-06-03 15:28:17.594630 | orchestrator |
2025-06-03 15:28:17.595417 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:28:17.596903 | orchestrator | Tuesday 03 June 2025 15:28:17 +0000 (0:00:00.374) 0:00:03.788 **********
2025-06-03 15:28:18.098141 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b4ac7e97-dff3-4114-bb9f-c387d4fd8c04)
2025-06-03 15:28:18.098312 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b4ac7e97-dff3-4114-bb9f-c387d4fd8c04)
2025-06-03 15:28:18.099628 | orchestrator |
2025-06-03 15:28:18.100396 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:28:18.101126 | orchestrator | Tuesday 03 June 2025 15:28:18 +0000 (0:00:00.504) 0:00:04.292 **********
2025-06-03 15:28:18.648095 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_61b072b3-0d8d-4456-975d-55fef61370d3)
2025-06-03 15:28:18.648774 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_61b072b3-0d8d-4456-975d-55fef61370d3)
2025-06-03 15:28:18.649306 | orchestrator |
2025-06-03 15:28:18.650725 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:28:18.651228 | orchestrator | Tuesday 03 June 2025 15:28:18 +0000 (0:00:00.549) 0:00:04.842 **********
2025-06-03 15:28:19.204186 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-03 15:28:19.204353 | orchestrator |
2025-06-03 15:28:19.205159 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:28:19.205779 | orchestrator | Tuesday 03 June 2025 15:28:19 +0000 (0:00:00.558) 0:00:05.400 **********
2025-06-03 15:28:19.562156 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-06-03 15:28:19.562341 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-06-03 15:28:19.563311 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-06-03 15:28:19.564262 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-06-03 15:28:19.565093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-06-03 15:28:19.566090 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-06-03 15:28:19.567046 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-06-03 15:28:19.568022 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-06-03 15:28:19.568757 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-06-03 15:28:19.569102 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-06-03 15:28:19.569720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-06-03 15:28:19.570406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-06-03 15:28:19.570989 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-06-03 15:28:19.571404 | orchestrator |
2025-06-03 15:28:19.572378 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:28:19.572856 | orchestrator | Tuesday 03 June 2025 15:28:19 +0000 (0:00:00.356) 0:00:05.756 **********
2025-06-03 15:28:19.737064 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:19.737149 | orchestrator |
2025-06-03 15:28:19.737163 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:28:19.737176 | orchestrator | Tuesday 03 June 2025 15:28:19 +0000 (0:00:00.174) 0:00:05.930 **********
2025-06-03 15:28:19.913143 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:19.913278 | orchestrator |
2025-06-03 15:28:19.913877 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:28:19.914307 | orchestrator | Tuesday 03 June 2025 15:28:19 +0000 (0:00:00.178) 0:00:06.109 **********
2025-06-03 15:28:20.084602 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:20.085111 | orchestrator |
2025-06-03 15:28:20.085825 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:28:20.086607 | orchestrator | Tuesday 03 June 2025 15:28:20 +0000 (0:00:00.171) 0:00:06.280 **********
2025-06-03 15:28:20.259291 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:20.260188 | orchestrator |
2025-06-03 15:28:20.261175 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:28:20.262454 | orchestrator | Tuesday 03 June 2025
15:28:20 +0000 (0:00:00.174) 0:00:06.455 ********** 2025-06-03 15:28:20.423407 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:20.423689 | orchestrator | 2025-06-03 15:28:20.424779 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:20.425315 | orchestrator | Tuesday 03 June 2025 15:28:20 +0000 (0:00:00.163) 0:00:06.619 ********** 2025-06-03 15:28:20.591623 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:20.591952 | orchestrator | 2025-06-03 15:28:20.592732 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:20.593428 | orchestrator | Tuesday 03 June 2025 15:28:20 +0000 (0:00:00.168) 0:00:06.787 ********** 2025-06-03 15:28:20.762421 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:20.762851 | orchestrator | 2025-06-03 15:28:20.763708 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:20.764765 | orchestrator | Tuesday 03 June 2025 15:28:20 +0000 (0:00:00.171) 0:00:06.958 ********** 2025-06-03 15:28:20.923216 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:20.923382 | orchestrator | 2025-06-03 15:28:20.924281 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:20.925811 | orchestrator | Tuesday 03 June 2025 15:28:20 +0000 (0:00:00.160) 0:00:07.119 ********** 2025-06-03 15:28:21.771764 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-03 15:28:21.772693 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-03 15:28:21.772915 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-03 15:28:21.773809 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-03 15:28:21.774246 | orchestrator | 2025-06-03 15:28:21.774896 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:21.775521 
| orchestrator | Tuesday 03 June 2025 15:28:21 +0000 (0:00:00.846) 0:00:07.966 ********** 2025-06-03 15:28:21.945094 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:21.945332 | orchestrator | 2025-06-03 15:28:21.945742 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:21.946789 | orchestrator | Tuesday 03 June 2025 15:28:21 +0000 (0:00:00.174) 0:00:08.141 ********** 2025-06-03 15:28:22.129384 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:22.129586 | orchestrator | 2025-06-03 15:28:22.129966 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:22.130534 | orchestrator | Tuesday 03 June 2025 15:28:22 +0000 (0:00:00.184) 0:00:08.325 ********** 2025-06-03 15:28:22.304152 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:22.305046 | orchestrator | 2025-06-03 15:28:22.306279 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:22.306654 | orchestrator | Tuesday 03 June 2025 15:28:22 +0000 (0:00:00.174) 0:00:08.500 ********** 2025-06-03 15:28:22.498996 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:22.499615 | orchestrator | 2025-06-03 15:28:22.501261 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-03 15:28:22.502104 | orchestrator | Tuesday 03 June 2025 15:28:22 +0000 (0:00:00.194) 0:00:08.694 ********** 2025-06-03 15:28:22.629821 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:22.629980 | orchestrator | 2025-06-03 15:28:22.630853 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-03 15:28:22.631232 | orchestrator | Tuesday 03 June 2025 15:28:22 +0000 (0:00:00.130) 0:00:08.825 ********** 2025-06-03 15:28:22.832701 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'5a262827-4eba-5d37-ab06-09e1d49a835c'}}) 2025-06-03 15:28:22.833117 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd47078ac-4564-569b-bfa7-6d988d420f95'}}) 2025-06-03 15:28:22.834709 | orchestrator | 2025-06-03 15:28:22.836167 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-03 15:28:22.836947 | orchestrator | Tuesday 03 June 2025 15:28:22 +0000 (0:00:00.202) 0:00:09.028 ********** 2025-06-03 15:28:24.768025 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'}) 2025-06-03 15:28:24.768163 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'}) 2025-06-03 15:28:24.769047 | orchestrator | 2025-06-03 15:28:24.771211 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-03 15:28:24.771764 | orchestrator | Tuesday 03 June 2025 15:28:24 +0000 (0:00:01.934) 0:00:10.962 ********** 2025-06-03 15:28:24.885997 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})  2025-06-03 15:28:24.886187 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})  2025-06-03 15:28:24.886925 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:24.887559 | orchestrator | 2025-06-03 15:28:24.888151 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-03 15:28:24.888893 | orchestrator | Tuesday 03 June 2025 15:28:24 +0000 (0:00:00.117) 0:00:11.080 ********** 2025-06-03 15:28:26.278795 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'}) 2025-06-03 15:28:26.278964 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'}) 2025-06-03 15:28:26.280269 | orchestrator | 2025-06-03 15:28:26.281326 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-03 15:28:26.282122 | orchestrator | Tuesday 03 June 2025 15:28:26 +0000 (0:00:01.391) 0:00:12.472 ********** 2025-06-03 15:28:26.416889 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})  2025-06-03 15:28:26.417643 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})  2025-06-03 15:28:26.418851 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:26.419663 | orchestrator | 2025-06-03 15:28:26.419979 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-03 15:28:26.420802 | orchestrator | Tuesday 03 June 2025 15:28:26 +0000 (0:00:00.140) 0:00:12.612 ********** 2025-06-03 15:28:26.540908 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:26.541213 | orchestrator | 2025-06-03 15:28:26.542453 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-03 15:28:26.542741 | orchestrator | Tuesday 03 June 2025 15:28:26 +0000 (0:00:00.123) 0:00:12.736 ********** 2025-06-03 15:28:26.790077 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})  2025-06-03 15:28:26.790710 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})  2025-06-03 15:28:26.792108 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:26.792932 | orchestrator | 2025-06-03 15:28:26.793636 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-03 15:28:26.794437 | orchestrator | Tuesday 03 June 2025 15:28:26 +0000 (0:00:00.249) 0:00:12.986 ********** 2025-06-03 15:28:26.915014 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:26.915229 | orchestrator | 2025-06-03 15:28:26.916033 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-03 15:28:26.916561 | orchestrator | Tuesday 03 June 2025 15:28:26 +0000 (0:00:00.125) 0:00:13.111 ********** 2025-06-03 15:28:27.055493 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})  2025-06-03 15:28:27.055691 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})  2025-06-03 15:28:27.056405 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:27.057807 | orchestrator | 2025-06-03 15:28:27.057848 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-03 15:28:27.058391 | orchestrator | Tuesday 03 June 2025 15:28:27 +0000 (0:00:00.138) 0:00:13.250 ********** 2025-06-03 15:28:27.171270 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:27.171798 | orchestrator | 2025-06-03 15:28:27.172362 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-03 15:28:27.173089 | orchestrator | Tuesday 03 June 2025 15:28:27 +0000 (0:00:00.116) 0:00:13.367 ********** 2025-06-03 15:28:27.301027 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})  2025-06-03 15:28:27.301205 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})  2025-06-03 15:28:27.302011 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:27.302955 | orchestrator | 2025-06-03 15:28:27.303198 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-03 15:28:27.303957 | orchestrator | Tuesday 03 June 2025 15:28:27 +0000 (0:00:00.129) 0:00:13.496 ********** 2025-06-03 15:28:27.426184 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:28:27.426359 | orchestrator | 2025-06-03 15:28:27.426870 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-03 15:28:27.427407 | orchestrator | Tuesday 03 June 2025 15:28:27 +0000 (0:00:00.125) 0:00:13.621 ********** 2025-06-03 15:28:27.572309 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})  2025-06-03 15:28:27.572652 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})  2025-06-03 15:28:27.573989 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:28:27.574112 | orchestrator | 2025-06-03 15:28:27.574889 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-03 15:28:27.575549 | orchestrator | Tuesday 03 June 2025 15:28:27 +0000 (0:00:00.146) 0:00:13.768 ********** 2025-06-03 15:28:27.697931 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})  
2025-06-03 15:28:27.698315 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})
2025-06-03 15:28:27.699779 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:27.699866 | orchestrator |
2025-06-03 15:28:27.700201 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-06-03 15:28:27.700711 | orchestrator | Tuesday 03 June 2025 15:28:27 +0000 (0:00:00.125) 0:00:13.894 **********
2025-06-03 15:28:27.830815 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})
2025-06-03 15:28:27.831279 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})
2025-06-03 15:28:27.831861 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:27.832506 | orchestrator |
2025-06-03 15:28:27.833060 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-06-03 15:28:27.833633 | orchestrator | Tuesday 03 June 2025 15:28:27 +0000 (0:00:00.131) 0:00:14.025 **********
2025-06-03 15:28:27.941767 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:27.942313 | orchestrator |
2025-06-03 15:28:27.942964 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-06-03 15:28:27.943502 | orchestrator | Tuesday 03 June 2025 15:28:27 +0000 (0:00:00.112) 0:00:14.137 **********
2025-06-03 15:28:28.057934 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:28.058317 | orchestrator |
2025-06-03 15:28:28.058864 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-06-03 15:28:28.059370 | orchestrator | Tuesday 03 June 2025 15:28:28 +0000 (0:00:00.115) 0:00:14.253 **********
2025-06-03 15:28:28.182905 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:28.183156 | orchestrator |
2025-06-03 15:28:28.183952 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-06-03 15:28:28.185280 | orchestrator | Tuesday 03 June 2025 15:28:28 +0000 (0:00:00.125) 0:00:14.378 **********
2025-06-03 15:28:28.428793 | orchestrator | ok: [testbed-node-3] => {
2025-06-03 15:28:28.429625 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-06-03 15:28:28.430595 | orchestrator | }
2025-06-03 15:28:28.431375 | orchestrator |
2025-06-03 15:28:28.432171 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-06-03 15:28:28.432944 | orchestrator | Tuesday 03 June 2025 15:28:28 +0000 (0:00:00.246) 0:00:14.625 **********
2025-06-03 15:28:28.551953 | orchestrator | ok: [testbed-node-3] => {
2025-06-03 15:28:28.552124 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-06-03 15:28:28.552404 | orchestrator | }
2025-06-03 15:28:28.552841 | orchestrator |
2025-06-03 15:28:28.553406 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-06-03 15:28:28.554104 | orchestrator | Tuesday 03 June 2025 15:28:28 +0000 (0:00:00.122) 0:00:14.748 **********
2025-06-03 15:28:28.672447 | orchestrator | ok: [testbed-node-3] => {
2025-06-03 15:28:28.672638 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-06-03 15:28:28.674157 | orchestrator | }
2025-06-03 15:28:28.674857 | orchestrator |
2025-06-03 15:28:28.675945 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-06-03 15:28:28.676649 | orchestrator | Tuesday 03 June 2025 15:28:28 +0000 (0:00:00.119) 0:00:14.868 **********
2025-06-03 15:28:29.265018 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:28:29.265217 | orchestrator |
2025-06-03 15:28:29.266204 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-06-03 15:28:29.267188 | orchestrator | Tuesday 03 June 2025 15:28:29 +0000 (0:00:00.591) 0:00:15.459 **********
2025-06-03 15:28:29.758697 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:28:29.759311 | orchestrator |
2025-06-03 15:28:29.760081 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-06-03 15:28:29.760882 | orchestrator | Tuesday 03 June 2025 15:28:29 +0000 (0:00:00.494) 0:00:15.954 **********
2025-06-03 15:28:30.287944 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:28:30.288047 | orchestrator |
2025-06-03 15:28:30.288848 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-06-03 15:28:30.289338 | orchestrator | Tuesday 03 June 2025 15:28:30 +0000 (0:00:00.523) 0:00:16.477 **********
2025-06-03 15:28:30.402115 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:28:30.403193 | orchestrator |
2025-06-03 15:28:30.404170 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-06-03 15:28:30.405298 | orchestrator | Tuesday 03 June 2025 15:28:30 +0000 (0:00:00.119) 0:00:16.597 **********
2025-06-03 15:28:30.504584 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:30.504675 | orchestrator |
2025-06-03 15:28:30.505283 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-06-03 15:28:30.505740 | orchestrator | Tuesday 03 June 2025 15:28:30 +0000 (0:00:00.102) 0:00:16.699 **********
2025-06-03 15:28:30.611399 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:30.611547 | orchestrator |
2025-06-03 15:28:30.611621 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-06-03 15:28:30.612054 | orchestrator | Tuesday 03 June 2025 15:28:30 +0000 (0:00:00.107) 0:00:16.807 **********
2025-06-03 15:28:30.743661 | orchestrator | ok: [testbed-node-3] => {
2025-06-03 15:28:30.743767 | orchestrator |     "vgs_report": {
2025-06-03 15:28:30.744055 | orchestrator |         "vg": []
2025-06-03 15:28:30.745292 | orchestrator |     }
2025-06-03 15:28:30.745313 | orchestrator | }
2025-06-03 15:28:30.745325 | orchestrator |
2025-06-03 15:28:30.745639 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-06-03 15:28:30.746152 | orchestrator | Tuesday 03 June 2025 15:28:30 +0000 (0:00:00.131) 0:00:16.939 **********
2025-06-03 15:28:30.854880 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:30.854966 | orchestrator |
2025-06-03 15:28:30.856777 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-06-03 15:28:30.857216 | orchestrator | Tuesday 03 June 2025 15:28:30 +0000 (0:00:00.109) 0:00:17.049 **********
2025-06-03 15:28:30.963312 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:30.963549 | orchestrator |
2025-06-03 15:28:30.964631 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-06-03 15:28:30.965748 | orchestrator | Tuesday 03 June 2025 15:28:30 +0000 (0:00:00.109) 0:00:17.159 **********
2025-06-03 15:28:31.076586 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:31.077017 | orchestrator |
2025-06-03 15:28:31.077811 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-06-03 15:28:31.078766 | orchestrator | Tuesday 03 June 2025 15:28:31 +0000 (0:00:00.112) 0:00:17.272 **********
2025-06-03 15:28:31.333584 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:31.333745 | orchestrator |
2025-06-03 15:28:31.334898 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-06-03 15:28:31.337095 | orchestrator | Tuesday 03 June 2025 15:28:31 +0000 (0:00:00.257) 0:00:17.529 **********
2025-06-03 15:28:31.459671 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:31.459822 | orchestrator |
2025-06-03 15:28:31.461606 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-06-03 15:28:31.462617 | orchestrator | Tuesday 03 June 2025 15:28:31 +0000 (0:00:00.124) 0:00:17.654 **********
2025-06-03 15:28:31.576015 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:31.576111 | orchestrator |
2025-06-03 15:28:31.576126 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-06-03 15:28:31.576196 | orchestrator | Tuesday 03 June 2025 15:28:31 +0000 (0:00:00.117) 0:00:17.772 **********
2025-06-03 15:28:31.693890 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:31.694091 | orchestrator |
2025-06-03 15:28:31.694687 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-06-03 15:28:31.695159 | orchestrator | Tuesday 03 June 2025 15:28:31 +0000 (0:00:00.116) 0:00:17.889 **********
2025-06-03 15:28:31.809945 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:31.810634 | orchestrator |
2025-06-03 15:28:31.811787 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-06-03 15:28:31.811811 | orchestrator | Tuesday 03 June 2025 15:28:31 +0000 (0:00:00.116) 0:00:18.005 **********
2025-06-03 15:28:31.934841 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:31.935004 | orchestrator |
2025-06-03 15:28:31.935808 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-06-03 15:28:31.936343 | orchestrator | Tuesday 03 June 2025 15:28:31 +0000 (0:00:00.123) 0:00:18.129 **********
2025-06-03 15:28:32.063124 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:32.063881 | orchestrator |
2025-06-03 15:28:32.064588 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-06-03 15:28:32.065194 | orchestrator | Tuesday 03 June 2025 15:28:32 +0000 (0:00:00.129) 0:00:18.258 **********
2025-06-03 15:28:32.185599 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:32.185888 | orchestrator |
2025-06-03 15:28:32.186349 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-06-03 15:28:32.186827 | orchestrator | Tuesday 03 June 2025 15:28:32 +0000 (0:00:00.120) 0:00:18.379 **********
2025-06-03 15:28:32.305209 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:32.305283 | orchestrator |
2025-06-03 15:28:32.305293 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-06-03 15:28:32.305353 | orchestrator | Tuesday 03 June 2025 15:28:32 +0000 (0:00:00.119) 0:00:18.499 **********
2025-06-03 15:28:32.416761 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:32.417385 | orchestrator |
2025-06-03 15:28:32.417944 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-06-03 15:28:32.418604 | orchestrator | Tuesday 03 June 2025 15:28:32 +0000 (0:00:00.113) 0:00:18.613 **********
2025-06-03 15:28:32.527467 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:32.527636 | orchestrator |
2025-06-03 15:28:32.527753 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-06-03 15:28:32.527838 | orchestrator | Tuesday 03 June 2025 15:28:32 +0000 (0:00:00.110) 0:00:18.723 **********
2025-06-03 15:28:32.651552 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})
2025-06-03 15:28:32.651905 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})
2025-06-03 15:28:32.652342 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:32.653002 | orchestrator |
2025-06-03 15:28:32.654451 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-06-03 15:28:32.654550 | orchestrator | Tuesday 03 June 2025 15:28:32 +0000 (0:00:00.124) 0:00:18.847 **********
2025-06-03 15:28:32.897115 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})
2025-06-03 15:28:32.897782 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})
2025-06-03 15:28:32.898900 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:32.899067 | orchestrator |
2025-06-03 15:28:32.900316 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-06-03 15:28:32.900985 | orchestrator | Tuesday 03 June 2025 15:28:32 +0000 (0:00:00.244) 0:00:19.092 **********
2025-06-03 15:28:33.026269 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})
2025-06-03 15:28:33.026449 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})
2025-06-03 15:28:33.027426 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:33.027905 | orchestrator |
2025-06-03 15:28:33.028503 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-06-03 15:28:33.028983 | orchestrator | Tuesday 03 June 2025 15:28:33 +0000 (0:00:00.130) 0:00:19.222 **********
2025-06-03 15:28:33.177981 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})
2025-06-03 15:28:33.178602 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})
2025-06-03 15:28:33.178976 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:33.179625 | orchestrator |
2025-06-03 15:28:33.179916 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-06-03 15:28:33.180385 | orchestrator | Tuesday 03 June 2025 15:28:33 +0000 (0:00:00.150) 0:00:19.373 **********
2025-06-03 15:28:33.307503 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})
2025-06-03 15:28:33.307982 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})
2025-06-03 15:28:33.309147 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:33.310219 | orchestrator |
2025-06-03 15:28:33.310676 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-06-03 15:28:33.311460 | orchestrator | Tuesday 03 June 2025 15:28:33 +0000 (0:00:00.129) 0:00:19.502 **********
2025-06-03 15:28:33.446182 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})
2025-06-03 15:28:33.447166 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})
2025-06-03 15:28:33.448221 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:33.448235 | orchestrator |
2025-06-03 15:28:33.449090 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-03 15:28:33.449669 | orchestrator | Tuesday 03 June 2025 15:28:33 +0000 (0:00:00.139) 0:00:19.642 **********
2025-06-03 15:28:33.576873 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})
2025-06-03 15:28:33.577671 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})
2025-06-03 15:28:33.578203 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:33.578989 | orchestrator |
2025-06-03 15:28:33.579840 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-03 15:28:33.580139 | orchestrator | Tuesday 03 June 2025 15:28:33 +0000 (0:00:00.129) 0:00:19.772 **********
2025-06-03 15:28:33.711340 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})
2025-06-03 15:28:33.711591 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})
2025-06-03 15:28:33.712077 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:33.712970 | orchestrator |
2025-06-03 15:28:33.713335 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-03 15:28:33.713677 | orchestrator | Tuesday 03 June 2025 15:28:33 +0000 (0:00:00.133) 0:00:19.906 **********
2025-06-03 15:28:34.204189 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:28:34.204888 | orchestrator |
2025-06-03 15:28:34.205630 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-03 15:28:34.206743 | orchestrator | Tuesday 03 June 2025 15:28:34 +0000 (0:00:00.492) 0:00:20.398 **********
2025-06-03 15:28:34.676121 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:28:34.676290 | orchestrator |
2025-06-03 15:28:34.676717 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-03 15:28:34.677143 | orchestrator | Tuesday 03 June 2025 15:28:34 +0000 (0:00:00.473) 0:00:20.872 **********
2025-06-03 15:28:34.813854 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:28:34.814652 | orchestrator |
2025-06-03 15:28:34.815695 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-03 15:28:34.816527 | orchestrator | Tuesday 03 June 2025 15:28:34 +0000 (0:00:00.137) 0:00:21.009 **********
2025-06-03 15:28:34.981966 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'vg_name': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})
2025-06-03 15:28:34.982841 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'vg_name': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})
2025-06-03 15:28:34.983612 | orchestrator |
2025-06-03 15:28:34.984889 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-03 15:28:34.985783 | orchestrator | Tuesday 03 June 2025 15:28:34 +0000 (0:00:00.166) 0:00:21.176 **********
2025-06-03 15:28:35.129105 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})
2025-06-03 15:28:35.131801 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})
2025-06-03 15:28:35.131826 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:35.132724 | orchestrator |
2025-06-03 15:28:35.133417 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-03 15:28:35.135396 | orchestrator | Tuesday 03 June 2025 15:28:35 +0000 (0:00:00.145) 0:00:21.321 **********
2025-06-03 15:28:35.380318 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})
2025-06-03 15:28:35.380713 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})
2025-06-03 15:28:35.382404 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:35.382430 | orchestrator |
2025-06-03 15:28:35.382971 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-03 15:28:35.383774 | orchestrator | Tuesday 03 June 2025 15:28:35 +0000 (0:00:00.255) 0:00:21.576 **********
2025-06-03 15:28:35.508902 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'})
2025-06-03 15:28:35.509613 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'})
2025-06-03 15:28:35.510397 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:28:35.511230 | orchestrator |
2025-06-03 15:28:35.511823 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-03 15:28:35.512863 | orchestrator | Tuesday 03 June 2025 15:28:35 +0000 (0:00:00.128) 0:00:21.704 **********
2025-06-03 15:28:35.778624 | orchestrator | ok: [testbed-node-3] => {
2025-06-03 15:28:35.779035 | orchestrator |     "lvm_report": {
2025-06-03 15:28:35.780283 | orchestrator |         "lv": [
2025-06-03 15:28:35.781131 | orchestrator |             {
2025-06-03 15:28:35.781979 | orchestrator |                 "lv_name": "osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c",
2025-06-03 15:28:35.783078 | orchestrator |                 "vg_name": "ceph-5a262827-4eba-5d37-ab06-09e1d49a835c"
2025-06-03
15:28:35.783846 | orchestrator |  }, 2025-06-03 15:28:35.784556 | orchestrator |  { 2025-06-03 15:28:35.785278 | orchestrator |  "lv_name": "osd-block-d47078ac-4564-569b-bfa7-6d988d420f95", 2025-06-03 15:28:35.786127 | orchestrator |  "vg_name": "ceph-d47078ac-4564-569b-bfa7-6d988d420f95" 2025-06-03 15:28:35.787028 | orchestrator |  } 2025-06-03 15:28:35.788018 | orchestrator |  ], 2025-06-03 15:28:35.788845 | orchestrator |  "pv": [ 2025-06-03 15:28:35.789258 | orchestrator |  { 2025-06-03 15:28:35.789648 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-03 15:28:35.789962 | orchestrator |  "vg_name": "ceph-5a262827-4eba-5d37-ab06-09e1d49a835c" 2025-06-03 15:28:35.790379 | orchestrator |  }, 2025-06-03 15:28:35.790781 | orchestrator |  { 2025-06-03 15:28:35.791152 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-03 15:28:35.791464 | orchestrator |  "vg_name": "ceph-d47078ac-4564-569b-bfa7-6d988d420f95" 2025-06-03 15:28:35.791816 | orchestrator |  } 2025-06-03 15:28:35.792150 | orchestrator |  ] 2025-06-03 15:28:35.792519 | orchestrator |  } 2025-06-03 15:28:35.792895 | orchestrator | } 2025-06-03 15:28:35.793248 | orchestrator | 2025-06-03 15:28:35.793816 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-03 15:28:35.794359 | orchestrator | 2025-06-03 15:28:35.794658 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-03 15:28:35.794959 | orchestrator | Tuesday 03 June 2025 15:28:35 +0000 (0:00:00.269) 0:00:21.974 ********** 2025-06-03 15:28:35.994354 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-03 15:28:35.994930 | orchestrator | 2025-06-03 15:28:35.995192 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-03 15:28:35.995702 | orchestrator | Tuesday 03 June 2025 15:28:35 +0000 (0:00:00.214) 0:00:22.189 ********** 2025-06-03 15:28:36.190320 | orchestrator | ok: 
[testbed-node-4] 2025-06-03 15:28:36.190512 | orchestrator | 2025-06-03 15:28:36.190957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:36.191638 | orchestrator | Tuesday 03 June 2025 15:28:36 +0000 (0:00:00.196) 0:00:22.386 ********** 2025-06-03 15:28:36.555840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-03 15:28:36.556879 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-03 15:28:36.557492 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-03 15:28:36.558613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-03 15:28:36.559198 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-03 15:28:36.560134 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-03 15:28:36.561956 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-03 15:28:36.562378 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-03 15:28:36.562867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-03 15:28:36.563324 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-03 15:28:36.563915 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-03 15:28:36.564426 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-03 15:28:36.565015 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-03 15:28:36.565531 | orchestrator | 2025-06-03 
15:28:36.566239 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:36.566624 | orchestrator | Tuesday 03 June 2025 15:28:36 +0000 (0:00:00.365) 0:00:22.751 ********** 2025-06-03 15:28:36.723964 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:36.724214 | orchestrator | 2025-06-03 15:28:36.724887 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:36.725616 | orchestrator | Tuesday 03 June 2025 15:28:36 +0000 (0:00:00.168) 0:00:22.920 ********** 2025-06-03 15:28:36.899933 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:36.900037 | orchestrator | 2025-06-03 15:28:36.900365 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:36.900996 | orchestrator | Tuesday 03 June 2025 15:28:36 +0000 (0:00:00.174) 0:00:23.094 ********** 2025-06-03 15:28:37.070007 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:37.070127 | orchestrator | 2025-06-03 15:28:37.070219 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:37.070419 | orchestrator | Tuesday 03 June 2025 15:28:37 +0000 (0:00:00.170) 0:00:23.264 ********** 2025-06-03 15:28:37.559885 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:37.561132 | orchestrator | 2025-06-03 15:28:37.562446 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:37.562644 | orchestrator | Tuesday 03 June 2025 15:28:37 +0000 (0:00:00.488) 0:00:23.753 ********** 2025-06-03 15:28:37.757397 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:37.758847 | orchestrator | 2025-06-03 15:28:37.758976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:37.759721 | orchestrator | Tuesday 03 June 2025 15:28:37 +0000 (0:00:00.198) 
0:00:23.951 ********** 2025-06-03 15:28:37.952619 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:37.954137 | orchestrator | 2025-06-03 15:28:37.955457 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:37.955826 | orchestrator | Tuesday 03 June 2025 15:28:37 +0000 (0:00:00.194) 0:00:24.146 ********** 2025-06-03 15:28:38.149082 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:38.149297 | orchestrator | 2025-06-03 15:28:38.150188 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:38.151667 | orchestrator | Tuesday 03 June 2025 15:28:38 +0000 (0:00:00.195) 0:00:24.342 ********** 2025-06-03 15:28:38.349230 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:38.349414 | orchestrator | 2025-06-03 15:28:38.349947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:38.350626 | orchestrator | Tuesday 03 June 2025 15:28:38 +0000 (0:00:00.201) 0:00:24.544 ********** 2025-06-03 15:28:38.764545 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba) 2025-06-03 15:28:38.765068 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba) 2025-06-03 15:28:38.765779 | orchestrator | 2025-06-03 15:28:38.767992 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:38.768029 | orchestrator | Tuesday 03 June 2025 15:28:38 +0000 (0:00:00.415) 0:00:24.959 ********** 2025-06-03 15:28:39.203714 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_88cf38eb-fdbf-404b-9f1d-cd32f6bedf4b) 2025-06-03 15:28:39.204146 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_88cf38eb-fdbf-404b-9f1d-cd32f6bedf4b) 2025-06-03 15:28:39.205271 | orchestrator | 2025-06-03 15:28:39.206320 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:39.207047 | orchestrator | Tuesday 03 June 2025 15:28:39 +0000 (0:00:00.438) 0:00:25.397 ********** 2025-06-03 15:28:39.631446 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_35e8ec34-b9aa-4705-9105-50464be240ba) 2025-06-03 15:28:39.631604 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_35e8ec34-b9aa-4705-9105-50464be240ba) 2025-06-03 15:28:39.632243 | orchestrator | 2025-06-03 15:28:39.632951 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:39.633628 | orchestrator | Tuesday 03 June 2025 15:28:39 +0000 (0:00:00.429) 0:00:25.826 ********** 2025-06-03 15:28:40.055288 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b1c5376b-f7c7-4aac-a0b2-3df8be7d9631) 2025-06-03 15:28:40.055667 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b1c5376b-f7c7-4aac-a0b2-3df8be7d9631) 2025-06-03 15:28:40.056516 | orchestrator | 2025-06-03 15:28:40.057175 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-03 15:28:40.058174 | orchestrator | Tuesday 03 June 2025 15:28:40 +0000 (0:00:00.422) 0:00:26.249 ********** 2025-06-03 15:28:40.385268 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-03 15:28:40.385371 | orchestrator | 2025-06-03 15:28:40.385386 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:40.385686 | orchestrator | Tuesday 03 June 2025 15:28:40 +0000 (0:00:00.328) 0:00:26.577 ********** 2025-06-03 15:28:40.977733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-03 15:28:40.979017 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-03 
15:28:40.980309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-03 15:28:40.980745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-03 15:28:40.981573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-03 15:28:40.982591 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-03 15:28:40.983230 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-03 15:28:40.984710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-03 15:28:40.985038 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-03 15:28:40.985655 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-03 15:28:40.986392 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-03 15:28:40.987003 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-03 15:28:40.987208 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-03 15:28:40.987627 | orchestrator | 2025-06-03 15:28:40.987985 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:40.988774 | orchestrator | Tuesday 03 June 2025 15:28:40 +0000 (0:00:00.594) 0:00:27.172 ********** 2025-06-03 15:28:41.174879 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:41.174973 | orchestrator | 2025-06-03 15:28:41.174987 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:41.175000 | orchestrator | Tuesday 
03 June 2025 15:28:41 +0000 (0:00:00.196) 0:00:27.368 ********** 2025-06-03 15:28:41.404681 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:41.405503 | orchestrator | 2025-06-03 15:28:41.406541 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:41.407249 | orchestrator | Tuesday 03 June 2025 15:28:41 +0000 (0:00:00.231) 0:00:27.600 ********** 2025-06-03 15:28:41.608577 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:41.609287 | orchestrator | 2025-06-03 15:28:41.610008 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:41.610533 | orchestrator | Tuesday 03 June 2025 15:28:41 +0000 (0:00:00.203) 0:00:27.804 ********** 2025-06-03 15:28:41.797257 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:41.797417 | orchestrator | 2025-06-03 15:28:41.798114 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:41.798847 | orchestrator | Tuesday 03 June 2025 15:28:41 +0000 (0:00:00.187) 0:00:27.991 ********** 2025-06-03 15:28:42.001826 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:42.002716 | orchestrator | 2025-06-03 15:28:42.003552 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:42.004304 | orchestrator | Tuesday 03 June 2025 15:28:41 +0000 (0:00:00.205) 0:00:28.197 ********** 2025-06-03 15:28:42.203955 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:42.204861 | orchestrator | 2025-06-03 15:28:42.205698 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:42.206778 | orchestrator | Tuesday 03 June 2025 15:28:42 +0000 (0:00:00.201) 0:00:28.398 ********** 2025-06-03 15:28:42.403099 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:42.403672 | orchestrator | 2025-06-03 15:28:42.404418 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:42.405279 | orchestrator | Tuesday 03 June 2025 15:28:42 +0000 (0:00:00.199) 0:00:28.598 ********** 2025-06-03 15:28:42.612360 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:42.713824 | orchestrator | 2025-06-03 15:28:42.713904 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:42.713919 | orchestrator | Tuesday 03 June 2025 15:28:42 +0000 (0:00:00.209) 0:00:28.807 ********** 2025-06-03 15:28:43.482276 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-03 15:28:43.482417 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-03 15:28:43.483324 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-03 15:28:43.485178 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-03 15:28:43.485222 | orchestrator | 2025-06-03 15:28:43.485233 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:43.485906 | orchestrator | Tuesday 03 June 2025 15:28:43 +0000 (0:00:00.868) 0:00:29.676 ********** 2025-06-03 15:28:43.689578 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:43.693910 | orchestrator | 2025-06-03 15:28:43.693953 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:43.693968 | orchestrator | Tuesday 03 June 2025 15:28:43 +0000 (0:00:00.209) 0:00:29.885 ********** 2025-06-03 15:28:43.885416 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:43.888334 | orchestrator | 2025-06-03 15:28:43.888382 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:43.888396 | orchestrator | Tuesday 03 June 2025 15:28:43 +0000 (0:00:00.194) 0:00:30.079 ********** 2025-06-03 15:28:44.553929 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:44.556073 | 
orchestrator | 2025-06-03 15:28:44.557095 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-03 15:28:44.557625 | orchestrator | Tuesday 03 June 2025 15:28:44 +0000 (0:00:00.667) 0:00:30.747 ********** 2025-06-03 15:28:44.776121 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:44.776293 | orchestrator | 2025-06-03 15:28:44.776376 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-03 15:28:44.776972 | orchestrator | Tuesday 03 June 2025 15:28:44 +0000 (0:00:00.224) 0:00:30.971 ********** 2025-06-03 15:28:44.909696 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:44.909953 | orchestrator | 2025-06-03 15:28:44.910609 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-03 15:28:44.911343 | orchestrator | Tuesday 03 June 2025 15:28:44 +0000 (0:00:00.133) 0:00:31.104 ********** 2025-06-03 15:28:45.094173 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f00e4ac9-9831-582f-92bc-f2b318630797'}}) 2025-06-03 15:28:45.094413 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2547461e-5dcb-5046-b3ed-0a182c83d3a8'}}) 2025-06-03 15:28:45.094959 | orchestrator | 2025-06-03 15:28:45.095449 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-03 15:28:45.097031 | orchestrator | Tuesday 03 June 2025 15:28:45 +0000 (0:00:00.185) 0:00:31.290 ********** 2025-06-03 15:28:47.046462 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'}) 2025-06-03 15:28:47.046913 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'}) 2025-06-03 15:28:47.047692 | 
orchestrator | 2025-06-03 15:28:47.049055 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-03 15:28:47.049804 | orchestrator | Tuesday 03 June 2025 15:28:47 +0000 (0:00:01.949) 0:00:33.240 ********** 2025-06-03 15:28:47.181118 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})  2025-06-03 15:28:47.181300 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})  2025-06-03 15:28:47.181827 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:47.182975 | orchestrator | 2025-06-03 15:28:47.183892 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-03 15:28:47.184354 | orchestrator | Tuesday 03 June 2025 15:28:47 +0000 (0:00:00.136) 0:00:33.377 ********** 2025-06-03 15:28:48.496674 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'}) 2025-06-03 15:28:48.496929 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'}) 2025-06-03 15:28:48.498830 | orchestrator | 2025-06-03 15:28:48.499321 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-03 15:28:48.500527 | orchestrator | Tuesday 03 June 2025 15:28:48 +0000 (0:00:01.313) 0:00:34.691 ********** 2025-06-03 15:28:48.631299 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})  2025-06-03 15:28:48.631782 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})  2025-06-03 15:28:48.632532 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:48.634147 | orchestrator | 2025-06-03 15:28:48.634283 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-03 15:28:48.634295 | orchestrator | Tuesday 03 June 2025 15:28:48 +0000 (0:00:00.136) 0:00:34.827 ********** 2025-06-03 15:28:48.740142 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:48.741132 | orchestrator | 2025-06-03 15:28:48.742218 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-03 15:28:48.742723 | orchestrator | Tuesday 03 June 2025 15:28:48 +0000 (0:00:00.108) 0:00:34.936 ********** 2025-06-03 15:28:48.865080 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})  2025-06-03 15:28:48.865446 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})  2025-06-03 15:28:48.865918 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:48.866435 | orchestrator | 2025-06-03 15:28:48.867116 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-03 15:28:48.868515 | orchestrator | Tuesday 03 June 2025 15:28:48 +0000 (0:00:00.125) 0:00:35.061 ********** 2025-06-03 15:28:48.976331 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:48.976731 | orchestrator | 2025-06-03 15:28:48.977075 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-03 15:28:48.977681 | orchestrator | Tuesday 03 June 2025 15:28:48 +0000 (0:00:00.110) 0:00:35.172 ********** 2025-06-03 15:28:49.109280 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})  2025-06-03 15:28:49.109435 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})  2025-06-03 15:28:49.110232 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:49.110943 | orchestrator | 2025-06-03 15:28:49.110965 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-03 15:28:49.111168 | orchestrator | Tuesday 03 June 2025 15:28:49 +0000 (0:00:00.132) 0:00:35.304 ********** 2025-06-03 15:28:49.364442 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:49.364792 | orchestrator | 2025-06-03 15:28:49.365665 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-03 15:28:49.366544 | orchestrator | Tuesday 03 June 2025 15:28:49 +0000 (0:00:00.255) 0:00:35.560 ********** 2025-06-03 15:28:49.503463 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})  2025-06-03 15:28:49.504144 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})  2025-06-03 15:28:49.504992 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:49.505629 | orchestrator | 2025-06-03 15:28:49.506579 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-03 15:28:49.506772 | orchestrator | Tuesday 03 June 2025 15:28:49 +0000 (0:00:00.136) 0:00:35.697 ********** 2025-06-03 15:28:49.625852 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:28:49.625936 | orchestrator | 2025-06-03 15:28:49.626054 | orchestrator | TASK [Count OSDs put on ceph_db_devices 
defined in lvm_volumes] **************** 2025-06-03 15:28:49.626736 | orchestrator | Tuesday 03 June 2025 15:28:49 +0000 (0:00:00.123) 0:00:35.821 ********** 2025-06-03 15:28:49.759677 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})  2025-06-03 15:28:49.759779 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})  2025-06-03 15:28:49.760455 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:49.761012 | orchestrator | 2025-06-03 15:28:49.761621 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-03 15:28:49.762473 | orchestrator | Tuesday 03 June 2025 15:28:49 +0000 (0:00:00.133) 0:00:35.954 ********** 2025-06-03 15:28:49.893967 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})  2025-06-03 15:28:49.894091 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})  2025-06-03 15:28:49.894548 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:49.895032 | orchestrator | 2025-06-03 15:28:49.895458 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-03 15:28:49.897316 | orchestrator | Tuesday 03 June 2025 15:28:49 +0000 (0:00:00.133) 0:00:36.088 ********** 2025-06-03 15:28:50.042884 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})  2025-06-03 15:28:50.044044 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 
'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})  2025-06-03 15:28:50.044832 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:50.045713 | orchestrator | 2025-06-03 15:28:50.046364 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-03 15:28:50.046964 | orchestrator | Tuesday 03 June 2025 15:28:50 +0000 (0:00:00.150) 0:00:36.238 ********** 2025-06-03 15:28:50.157587 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:50.157763 | orchestrator | 2025-06-03 15:28:50.158895 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-03 15:28:50.160027 | orchestrator | Tuesday 03 June 2025 15:28:50 +0000 (0:00:00.114) 0:00:36.352 ********** 2025-06-03 15:28:50.288165 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:50.288441 | orchestrator | 2025-06-03 15:28:50.289582 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-03 15:28:50.290332 | orchestrator | Tuesday 03 June 2025 15:28:50 +0000 (0:00:00.131) 0:00:36.484 ********** 2025-06-03 15:28:50.393540 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:28:50.393706 | orchestrator | 2025-06-03 15:28:50.394133 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-03 15:28:50.394935 | orchestrator | Tuesday 03 June 2025 15:28:50 +0000 (0:00:00.105) 0:00:36.589 ********** 2025-06-03 15:28:50.525393 | orchestrator | ok: [testbed-node-4] => { 2025-06-03 15:28:50.526171 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-03 15:28:50.526350 | orchestrator | } 2025-06-03 15:28:50.527587 | orchestrator | 2025-06-03 15:28:50.529110 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-03 15:28:50.529423 | orchestrator | Tuesday 03 June 2025 15:28:50 +0000 (0:00:00.130) 0:00:36.720 ********** 2025-06-03 15:28:50.649445 | 
orchestrator | ok: [testbed-node-4] => {
2025-06-03 15:28:50.649724 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-06-03 15:28:50.650336 | orchestrator | }
2025-06-03 15:28:50.652905 | orchestrator |
2025-06-03 15:28:50.653725 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-06-03 15:28:50.653941 | orchestrator | Tuesday 03 June 2025 15:28:50 +0000 (0:00:00.123) 0:00:36.844 **********
2025-06-03 15:28:50.798340 | orchestrator | ok: [testbed-node-4] => {
2025-06-03 15:28:50.798445 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-06-03 15:28:50.798461 | orchestrator | }
2025-06-03 15:28:50.798897 | orchestrator |
2025-06-03 15:28:50.799184 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-06-03 15:28:50.799688 | orchestrator | Tuesday 03 June 2025 15:28:50 +0000 (0:00:00.149) 0:00:36.994 **********
2025-06-03 15:28:51.496740 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:28:51.498270 | orchestrator |
2025-06-03 15:28:51.498323 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-06-03 15:28:51.499569 | orchestrator | Tuesday 03 June 2025 15:28:51 +0000 (0:00:00.696) 0:00:37.690 **********
2025-06-03 15:28:51.998890 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:28:51.999770 | orchestrator |
2025-06-03 15:28:52.000878 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-06-03 15:28:52.003247 | orchestrator | Tuesday 03 June 2025 15:28:51 +0000 (0:00:00.503) 0:00:38.194 **********
2025-06-03 15:28:52.544451 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:28:52.544939 | orchestrator |
2025-06-03 15:28:52.545736 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-06-03 15:28:52.548008 | orchestrator | Tuesday 03 June 2025 15:28:52 +0000 (0:00:00.544) 0:00:38.738 **********
2025-06-03 15:28:52.684999 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:28:52.685726 | orchestrator |
2025-06-03 15:28:52.686063 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-06-03 15:28:52.686865 | orchestrator | Tuesday 03 June 2025 15:28:52 +0000 (0:00:00.142) 0:00:38.880 **********
2025-06-03 15:28:52.795434 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:52.796335 | orchestrator |
2025-06-03 15:28:52.797372 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-06-03 15:28:52.798113 | orchestrator | Tuesday 03 June 2025 15:28:52 +0000 (0:00:00.110) 0:00:38.991 **********
2025-06-03 15:28:52.892304 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:52.892401 | orchestrator |
2025-06-03 15:28:52.892798 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-06-03 15:28:52.893213 | orchestrator | Tuesday 03 June 2025 15:28:52 +0000 (0:00:00.157) 0:00:39.087 **********
2025-06-03 15:28:53.050239 | orchestrator | ok: [testbed-node-4] => {
2025-06-03 15:28:53.050337 | orchestrator |  "vgs_report": {
2025-06-03 15:28:53.051083 | orchestrator |  "vg": []
2025-06-03 15:28:53.051831 | orchestrator |  }
2025-06-03 15:28:53.052441 | orchestrator | }
2025-06-03 15:28:53.054649 | orchestrator |
2025-06-03 15:28:53.054680 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-06-03 15:28:53.055552 | orchestrator | Tuesday 03 June 2025 15:28:53 +0000 (0:00:00.157) 0:00:39.245 **********
2025-06-03 15:28:53.168699 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:53.168874 | orchestrator |
2025-06-03 15:28:53.169074 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-06-03 15:28:53.169391 | orchestrator | Tuesday 03 June 2025 15:28:53 +0000 (0:00:00.116) 0:00:39.362 **********
2025-06-03 15:28:53.297403 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:53.297474 | orchestrator |
2025-06-03 15:28:53.297778 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-06-03 15:28:53.297788 | orchestrator | Tuesday 03 June 2025 15:28:53 +0000 (0:00:00.129) 0:00:39.492 **********
2025-06-03 15:28:53.427676 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:53.427861 | orchestrator |
2025-06-03 15:28:53.429183 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-06-03 15:28:53.429938 | orchestrator | Tuesday 03 June 2025 15:28:53 +0000 (0:00:00.130) 0:00:39.622 **********
2025-06-03 15:28:53.558226 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:53.558429 | orchestrator |
2025-06-03 15:28:53.559917 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-06-03 15:28:53.560165 | orchestrator | Tuesday 03 June 2025 15:28:53 +0000 (0:00:00.130) 0:00:39.753 **********
2025-06-03 15:28:53.694868 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:53.695372 | orchestrator |
2025-06-03 15:28:53.696380 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-06-03 15:28:53.696984 | orchestrator | Tuesday 03 June 2025 15:28:53 +0000 (0:00:00.136) 0:00:39.889 **********
2025-06-03 15:28:54.033050 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:54.033162 | orchestrator |
2025-06-03 15:28:54.033755 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-06-03 15:28:54.034082 | orchestrator | Tuesday 03 June 2025 15:28:54 +0000 (0:00:00.337) 0:00:40.227 **********
2025-06-03 15:28:54.169876 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:54.170213 | orchestrator |
2025-06-03 15:28:54.171640 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-06-03 15:28:54.173715 | orchestrator | Tuesday 03 June 2025 15:28:54 +0000 (0:00:00.137) 0:00:40.364 **********
2025-06-03 15:28:54.305661 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:54.306809 | orchestrator |
2025-06-03 15:28:54.307782 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-06-03 15:28:54.309740 | orchestrator | Tuesday 03 June 2025 15:28:54 +0000 (0:00:00.136) 0:00:40.501 **********
2025-06-03 15:28:54.447863 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:54.449156 | orchestrator |
2025-06-03 15:28:54.450122 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-06-03 15:28:54.450607 | orchestrator | Tuesday 03 June 2025 15:28:54 +0000 (0:00:00.142) 0:00:40.643 **********
2025-06-03 15:28:54.580426 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:54.581456 | orchestrator |
2025-06-03 15:28:54.582908 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-06-03 15:28:54.583426 | orchestrator | Tuesday 03 June 2025 15:28:54 +0000 (0:00:00.131) 0:00:40.774 **********
2025-06-03 15:28:54.720474 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:54.721693 | orchestrator |
2025-06-03 15:28:54.723038 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-06-03 15:28:54.723625 | orchestrator | Tuesday 03 June 2025 15:28:54 +0000 (0:00:00.141) 0:00:40.915 **********
2025-06-03 15:28:54.848820 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:54.849198 | orchestrator |
2025-06-03 15:28:54.850274 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-06-03 15:28:54.850895 | orchestrator | Tuesday 03 June 2025 15:28:54 +0000 (0:00:00.128) 0:00:41.044 **********
2025-06-03 15:28:54.980776 | orchestrator | skipping: [testbed-node-4]
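The "Calculate size needed" / "Fail if ... > available" / "Fail if DB LV size < 30 GiB" tasks above amount to a capacity check per VG. A minimal sketch of that logic, assuming the checks work roughly as the task titles suggest (the function name, parameters, and exact rules are hypothetical, not the playbook's actual code):

```python
# Hypothetical sketch of the size checks logged above: given how many DB LVs a
# VG should carry and the bytes it has free, validate the requested per-LV size
# against the VG capacity and the 30 GiB floor named in the task titles.
GIB = 1024 ** 3
MIN_DB_LV_BYTES = 30 * GIB  # the "Fail if DB LV size < 30 GiB" threshold

def check_db_lvs(num_lvs_wanted: int, vg_free_bytes: int, db_lv_bytes: int) -> int:
    """Return total bytes needed; raise if the VG cannot hold the requested LVs."""
    if db_lv_bytes < MIN_DB_LV_BYTES:
        raise ValueError(f"DB LV size {db_lv_bytes} is below the 30 GiB minimum")
    needed = num_lvs_wanted * db_lv_bytes
    if needed > vg_free_bytes:
        raise ValueError(f"need {needed} bytes but only {vg_free_bytes} available")
    return needed
```

In this run all of these tasks report `skipping` because no `ceph_db_devices`, `ceph_wal_devices`, or `ceph_db_wal_devices` are configured, so there is nothing to validate.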
2025-06-03 15:28:54.980862 | orchestrator |
2025-06-03 15:28:54.981834 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-06-03 15:28:54.982991 | orchestrator | Tuesday 03 June 2025 15:28:54 +0000 (0:00:00.130) 0:00:41.175 **********
2025-06-03 15:28:55.116418 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:55.117106 | orchestrator |
2025-06-03 15:28:55.117884 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-06-03 15:28:55.118426 | orchestrator | Tuesday 03 June 2025 15:28:55 +0000 (0:00:00.136) 0:00:41.311 **********
2025-06-03 15:28:55.278476 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})
2025-06-03 15:28:55.278783 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})
2025-06-03 15:28:55.279986 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:55.281012 | orchestrator |
2025-06-03 15:28:55.281861 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-06-03 15:28:55.282667 | orchestrator | Tuesday 03 June 2025 15:28:55 +0000 (0:00:00.160) 0:00:41.472 **********
2025-06-03 15:28:55.423464 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})
2025-06-03 15:28:55.423655 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})
2025-06-03 15:28:55.424670 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:55.425621 | orchestrator |
2025-06-03 15:28:55.427699 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-06-03 15:28:55.428717 | orchestrator | Tuesday 03 June 2025 15:28:55 +0000 (0:00:00.146) 0:00:41.618 **********
2025-06-03 15:28:55.585567 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})
2025-06-03 15:28:55.585727 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})
2025-06-03 15:28:55.587615 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:55.588472 | orchestrator |
2025-06-03 15:28:55.589508 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-06-03 15:28:55.590252 | orchestrator | Tuesday 03 June 2025 15:28:55 +0000 (0:00:00.161) 0:00:41.779 **********
2025-06-03 15:28:55.953398 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})
2025-06-03 15:28:55.953629 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})
2025-06-03 15:28:55.954287 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:55.955072 | orchestrator |
2025-06-03 15:28:55.955350 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-06-03 15:28:55.957266 | orchestrator | Tuesday 03 June 2025 15:28:55 +0000 (0:00:00.368) 0:00:42.147 **********
2025-06-03 15:28:56.105701 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})
2025-06-03 15:28:56.106995 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})
2025-06-03 15:28:56.108171 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:56.108805 | orchestrator |
2025-06-03 15:28:56.109899 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-06-03 15:28:56.110791 | orchestrator | Tuesday 03 June 2025 15:28:56 +0000 (0:00:00.152) 0:00:42.300 **********
2025-06-03 15:28:56.250444 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})
2025-06-03 15:28:56.250773 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})
2025-06-03 15:28:56.251463 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:56.252842 | orchestrator |
2025-06-03 15:28:56.253751 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-03 15:28:56.255000 | orchestrator | Tuesday 03 June 2025 15:28:56 +0000 (0:00:00.149) 0:00:42.444 **********
2025-06-03 15:28:56.400570 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})
2025-06-03 15:28:56.400669 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})
2025-06-03 15:28:56.401292 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:56.402609 | orchestrator |
2025-06-03 15:28:56.403271 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-03 15:28:56.403882 | orchestrator | Tuesday 03 June 2025 15:28:56 +0000 (0:00:00.149) 0:00:42.594 **********
2025-06-03 15:28:56.578935 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})
2025-06-03 15:28:56.579017 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})
2025-06-03 15:28:56.580287 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:56.581697 | orchestrator |
2025-06-03 15:28:56.583053 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-03 15:28:56.583979 | orchestrator | Tuesday 03 June 2025 15:28:56 +0000 (0:00:00.177) 0:00:42.771 **********
2025-06-03 15:28:57.112734 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:28:57.113928 | orchestrator |
2025-06-03 15:28:57.114746 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-03 15:28:57.115239 | orchestrator | Tuesday 03 June 2025 15:28:57 +0000 (0:00:00.536) 0:00:43.307 **********
2025-06-03 15:28:57.634350 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:28:57.634754 | orchestrator |
2025-06-03 15:28:57.636135 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-03 15:28:57.636932 | orchestrator | Tuesday 03 June 2025 15:28:57 +0000 (0:00:00.156) 0:00:43.827 **********
2025-06-03 15:28:57.788788 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:28:57.788871 | orchestrator |
2025-06-03 15:28:57.788941 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-03 15:28:57.789244 | orchestrator | Tuesday 03 June 2025 15:28:57 +0000 (0:00:00.156) 0:00:43.984 **********
2025-06-03 15:28:57.959829 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'vg_name': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})
2025-06-03 15:28:57.960580 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'vg_name': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})
2025-06-03 15:28:57.961099 | orchestrator |
2025-06-03 15:28:57.961188 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-03 15:28:57.961989 | orchestrator | Tuesday 03 June 2025 15:28:57 +0000 (0:00:00.169) 0:00:44.153 **********
2025-06-03 15:28:58.134756 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})
2025-06-03 15:28:58.135464 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})
2025-06-03 15:28:58.136409 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:58.136946 | orchestrator |
2025-06-03 15:28:58.138658 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-03 15:28:58.138683 | orchestrator | Tuesday 03 June 2025 15:28:58 +0000 (0:00:00.176) 0:00:44.329 **********
2025-06-03 15:28:58.289102 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})
2025-06-03 15:28:58.289930 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})
2025-06-03 15:28:58.290639 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:58.291175 | orchestrator |
2025-06-03 15:28:58.291806 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-03 15:28:58.292215 | orchestrator | Tuesday 03 June 2025 15:28:58 +0000 (0:00:00.152) 0:00:44.482 **********
2025-06-03 15:28:58.458663 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'})
2025-06-03 15:28:58.459770 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'})
2025-06-03 15:28:58.461642 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:28:58.461677 | orchestrator |
2025-06-03 15:28:58.462067 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-03 15:28:58.462727 | orchestrator | Tuesday 03 June 2025 15:28:58 +0000 (0:00:00.170) 0:00:44.652 **********
2025-06-03 15:28:58.950637 | orchestrator | ok: [testbed-node-4] => {
2025-06-03 15:28:58.950722 | orchestrator |  "lvm_report": {
2025-06-03 15:28:58.952064 | orchestrator |  "lv": [
2025-06-03 15:28:58.952635 | orchestrator |  {
2025-06-03 15:28:58.953637 | orchestrator |  "lv_name": "osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8",
2025-06-03 15:28:58.954291 | orchestrator |  "vg_name": "ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8"
2025-06-03 15:28:58.955008 | orchestrator |  },
2025-06-03 15:28:58.955683 | orchestrator |  {
2025-06-03 15:28:58.956338 | orchestrator |  "lv_name": "osd-block-f00e4ac9-9831-582f-92bc-f2b318630797",
2025-06-03 15:28:58.956994 | orchestrator |  "vg_name": "ceph-f00e4ac9-9831-582f-92bc-f2b318630797"
2025-06-03 15:28:58.957936 | orchestrator |  }
2025-06-03 15:28:58.958801 | orchestrator |  ],
2025-06-03 15:28:58.959954 | orchestrator |  "pv": [
2025-06-03 15:28:58.959987 | orchestrator |  {
2025-06-03 15:28:58.960414 | orchestrator |  "pv_name": "/dev/sdb",
2025-06-03 15:28:58.961199 | orchestrator |  "vg_name": "ceph-f00e4ac9-9831-582f-92bc-f2b318630797"
2025-06-03 15:28:58.961468 | orchestrator |  },
2025-06-03 15:28:58.962059 | orchestrator |  {
2025-06-03 15:28:58.962370 | orchestrator |  "pv_name": "/dev/sdc",
2025-06-03 15:28:58.962785 | orchestrator |  "vg_name": "ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8"
2025-06-03 15:28:58.963154 | orchestrator |  }
2025-06-03 15:28:58.963538 | orchestrator |  ]
2025-06-03 15:28:58.963899 | orchestrator |  }
2025-06-03 15:28:58.964294 | orchestrator | }
2025-06-03 15:28:58.964698 | orchestrator |
2025-06-03 15:28:58.965056 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-03 15:28:58.965457 | orchestrator |
2025-06-03 15:28:58.966075 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-03 15:28:58.966201 | orchestrator | Tuesday 03 June 2025 15:28:58 +0000 (0:00:00.490) 0:00:45.143 **********
2025-06-03 15:28:59.202599 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-03 15:28:59.203177 | orchestrator |
2025-06-03 15:28:59.203618 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-03 15:28:59.204796 | orchestrator | Tuesday 03 June 2025 15:28:59 +0000 (0:00:00.253) 0:00:45.397 **********
2025-06-03 15:28:59.468189 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:28:59.468888 | orchestrator |
2025-06-03 15:28:59.470011 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:28:59.470942 | orchestrator | Tuesday 03 June 2025 15:28:59 +0000 (0:00:00.262) 0:00:45.660 **********
2025-06-03 15:28:59.872389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-06-03 15:28:59.873815 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-06-03 15:28:59.874681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-06-03 15:28:59.876691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-06-03 15:28:59.877460 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-06-03 15:28:59.878160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-06-03 15:28:59.878689 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-06-03 15:28:59.879060 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-06-03 15:28:59.879786 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-06-03 15:28:59.880628 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-06-03 15:28:59.881261 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-06-03 15:28:59.881680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-06-03 15:28:59.882011 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-06-03 15:28:59.882378 | orchestrator |
2025-06-03 15:28:59.882832 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:28:59.883167 | orchestrator | Tuesday 03 June 2025 15:28:59 +0000 (0:00:00.407) 0:00:46.067 **********
2025-06-03 15:29:00.064163 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:00.064716 | orchestrator |
2025-06-03 15:29:00.065128 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:29:00.066956 | orchestrator | Tuesday 03 June 2025 15:29:00 +0000 (0:00:00.191) 0:00:46.258 **********
2025-06-03 15:29:00.272953 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:00.273156 | orchestrator |
2025-06-03 15:29:00.273741 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:29:00.274295 | orchestrator | Tuesday 03 June 2025 15:29:00 +0000 (0:00:00.209) 0:00:46.468 **********
2025-06-03 15:29:00.480248 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:00.480526 | orchestrator |
2025-06-03 15:29:00.482239 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:29:00.482279 | orchestrator | Tuesday 03 June 2025 15:29:00 +0000 (0:00:00.204) 0:00:46.672 **********
2025-06-03 15:29:00.677600 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:00.677674 | orchestrator |
2025-06-03 15:29:00.677733 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:29:00.678940 | orchestrator | Tuesday 03 June 2025 15:29:00 +0000 (0:00:00.199) 0:00:46.872 **********
2025-06-03 15:29:00.886244 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:00.886712 | orchestrator |
2025-06-03 15:29:00.887633 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:29:00.888148 | orchestrator | Tuesday 03 June 2025 15:29:00 +0000 (0:00:00.204) 0:00:47.077 **********
2025-06-03 15:29:01.569763 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:01.570732 | orchestrator |
2025-06-03 15:29:01.571567 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:29:01.572195 | orchestrator | Tuesday 03 June 2025 15:29:01 +0000 (0:00:00.687) 0:00:47.764 **********
2025-06-03 15:29:01.782805 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:01.783617 | orchestrator |
2025-06-03 15:29:01.784246 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:29:01.785096 | orchestrator | Tuesday 03 June 2025 15:29:01 +0000 (0:00:00.212) 0:00:47.977 **********
2025-06-03 15:29:01.984190 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:01.984340 | orchestrator |
2025-06-03 15:29:01.985046 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:29:01.985972 | orchestrator | Tuesday 03 June 2025 15:29:01 +0000 (0:00:00.200) 0:00:48.178 **********
2025-06-03 15:29:02.419264 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa)
2025-06-03 15:29:02.419442 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa)
2025-06-03 15:29:02.421232 | orchestrator |
2025-06-03 15:29:02.422256 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:29:02.423372 | orchestrator | Tuesday 03 June 2025 15:29:02 +0000 (0:00:00.434) 0:00:48.612 **********
2025-06-03 15:29:02.850614 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fa411336-a154-4770-b6c1-ce8fec2c95f2)
2025-06-03 15:29:02.852484 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fa411336-a154-4770-b6c1-ce8fec2c95f2)
2025-06-03 15:29:02.853684 | orchestrator |
2025-06-03 15:29:02.853804 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:29:02.854659 | orchestrator | Tuesday 03 June 2025 15:29:02 +0000 (0:00:00.432) 0:00:49.045 **********
2025-06-03 15:29:03.265016 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ffe2a0ca-5a38-47a9-803d-00b473435346)
2025-06-03 15:29:03.265717 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ffe2a0ca-5a38-47a9-803d-00b473435346)
2025-06-03 15:29:03.266282 | orchestrator |
2025-06-03 15:29:03.267284 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:29:03.269240 | orchestrator | Tuesday 03 June 2025 15:29:03 +0000 (0:00:00.414) 0:00:49.459 **********
2025-06-03 15:29:03.704702 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ed092372-9559-4d48-8a48-c44bdb9ee908)
2025-06-03 15:29:03.707883 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ed092372-9559-4d48-8a48-c44bdb9ee908)
2025-06-03 15:29:03.707918 | orchestrator |
2025-06-03 15:29:03.707932 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-03 15:29:03.708307 | orchestrator | Tuesday 03 June 2025 15:29:03 +0000 (0:00:00.438) 0:00:49.898 **********
2025-06-03 15:29:04.021468 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-03 15:29:04.022347 | orchestrator |
2025-06-03 15:29:04.022950 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:29:04.024071 | orchestrator | Tuesday 03 June 2025 15:29:04 +0000 (0:00:00.317) 0:00:50.216 **********
2025-06-03 15:29:04.435073 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-06-03 15:29:04.436238 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-06-03 15:29:04.438176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-06-03 15:29:04.438585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-06-03 15:29:04.439797 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-06-03 15:29:04.440532 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-06-03 15:29:04.441398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-06-03 15:29:04.441794 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-06-03 15:29:04.442259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-06-03 15:29:04.442770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-06-03 15:29:04.443220 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-06-03 15:29:04.443949 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-06-03 15:29:04.444184 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-06-03 15:29:04.444611 | orchestrator |
2025-06-03 15:29:04.445049 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:29:04.445405 | orchestrator | Tuesday 03 June 2025 15:29:04 +0000 (0:00:00.412) 0:00:50.629 **********
2025-06-03 15:29:04.633232 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:04.633364 | orchestrator |
2025-06-03 15:29:04.633561 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:29:04.634150 | orchestrator | Tuesday 03 June 2025 15:29:04 +0000 (0:00:00.198) 0:00:50.827 **********
2025-06-03 15:29:04.829488 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:04.830013 | orchestrator |
2025-06-03 15:29:04.830935 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:29:04.831900 | orchestrator | Tuesday 03 June 2025 15:29:04 +0000 (0:00:00.196) 0:00:51.024 **********
2025-06-03 15:29:05.484710 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:05.484838 | orchestrator |
2025-06-03 15:29:05.484950 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:29:05.485435 | orchestrator | Tuesday 03 June 2025 15:29:05 +0000 (0:00:00.654) 0:00:51.678 **********
2025-06-03 15:29:05.697901 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:05.698123 | orchestrator |
2025-06-03 15:29:05.698600 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:29:05.698889 | orchestrator | Tuesday 03 June 2025 15:29:05 +0000 (0:00:00.214) 0:00:51.893 **********
2025-06-03 15:29:05.909359 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:05.909761 | orchestrator |
2025-06-03 15:29:05.911337 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:29:05.911388 | orchestrator | Tuesday 03 June 2025 15:29:05 +0000 (0:00:00.209) 0:00:52.102 **********
2025-06-03 15:29:06.097817 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:06.097921 | orchestrator |
2025-06-03 15:29:06.098254 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:29:06.099059 | orchestrator | Tuesday 03 June 2025 15:29:06 +0000 (0:00:00.188) 0:00:52.291 **********
2025-06-03 15:29:06.292679 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:06.293608 | orchestrator |
2025-06-03 15:29:06.293812 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:29:06.298688 | orchestrator | Tuesday 03 June 2025 15:29:06 +0000 (0:00:00.195) 0:00:52.487 **********
2025-06-03 15:29:06.491621 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:06.493954 | orchestrator |
2025-06-03 15:29:06.493982 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:29:06.494222 | orchestrator | Tuesday 03 June 2025 15:29:06 +0000 (0:00:00.197) 0:00:52.685 **********
2025-06-03 15:29:07.131659 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-06-03 15:29:07.134103 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-06-03 15:29:07.135186 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-06-03 15:29:07.135920 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-06-03 15:29:07.137013 | orchestrator |
2025-06-03 15:29:07.137597 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:29:07.138465 | orchestrator | Tuesday 03 June 2025 15:29:07 +0000 (0:00:00.640) 0:00:53.326 **********
2025-06-03 15:29:07.328323 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:07.328819 | orchestrator |
2025-06-03 15:29:07.331334 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:29:07.332114 | orchestrator | Tuesday 03 June 2025 15:29:07 +0000 (0:00:00.196) 0:00:53.523 **********
2025-06-03 15:29:07.519682 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:07.520429 | orchestrator |
2025-06-03 15:29:07.521611 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:29:07.522549 | orchestrator | Tuesday 03 June 2025 15:29:07 +0000 (0:00:00.190) 0:00:53.713 **********
2025-06-03 15:29:07.716049 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:07.716242 | orchestrator |
2025-06-03 15:29:07.717267 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-03 15:29:07.717817 | orchestrator | Tuesday 03 June 2025 15:29:07 +0000 (0:00:00.197) 0:00:53.910 **********
2025-06-03 15:29:07.915262 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:07.916131 | orchestrator |
2025-06-03 15:29:07.916543 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-06-03 15:29:07.917897 | orchestrator | Tuesday 03 June 2025 15:29:07 +0000 (0:00:00.199) 0:00:54.110 **********
2025-06-03 15:29:08.052217 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:08.052972 | orchestrator |
2025-06-03 15:29:08.054371 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-06-03 15:29:08.055312 | orchestrator | Tuesday 03 June 2025 15:29:08 +0000 (0:00:00.135) 0:00:54.245 **********
2025-06-03 15:29:08.453726 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '610c71bb-335d-5813-8d53-12327c30775e'}})
2025-06-03 15:29:08.454655 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ae8860ce-b651-5449-9c0b-e6c018225b94'}})
2025-06-03 15:29:08.456752 | orchestrator |
2025-06-03 15:29:08.457754 | orchestrator | TASK [Create block VGs] ********************************************************
2025-06-03 15:29:08.459008 | orchestrator | Tuesday 03 June 2025 15:29:08 +0000 (0:00:00.402) 0:00:54.648 **********
2025-06-03 15:29:10.288547 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})
2025-06-03 15:29:10.289146 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})
2025-06-03 15:29:10.290525 | orchestrator |
2025-06-03 15:29:10.291662 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-06-03 15:29:10.292078 | orchestrator | Tuesday 03 June 2025 15:29:10 +0000 (0:00:01.831) 0:00:56.480 **********
2025-06-03 15:29:10.445163 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})
2025-06-03 15:29:10.446046 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})
2025-06-03 15:29:10.446938 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:10.447362 | orchestrator |
2025-06-03 15:29:10.448189 | orchestrator | TASK [Create block LVs] ********************************************************
2025-06-03 15:29:10.449919 | orchestrator | Tuesday 03 June 2025 15:29:10 +0000 (0:00:00.159) 0:00:56.640 **********
2025-06-03 15:29:11.771540 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})
2025-06-03 15:29:11.771729 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})
2025-06-03 15:29:11.772150 | orchestrator |
2025-06-03 15:29:11.772671 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-06-03 15:29:11.773099 | orchestrator | Tuesday 03 June 2025 15:29:11 +0000 (0:00:01.324) 0:00:57.964 **********
2025-06-03 15:29:11.933412 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})
2025-06-03 15:29:11.934727 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})
2025-06-03 15:29:11.936380 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:11.936571 | orchestrator |
2025-06-03 15:29:11.937749 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-06-03 15:29:11.938835 | orchestrator | Tuesday 03 June 2025 15:29:11 +0000 (0:00:00.163) 0:00:58.128 **********
2025-06-03 15:29:12.060142 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:12.060285 | orchestrator |
2025-06-03 15:29:12.061095 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-06-03 15:29:12.061415 | orchestrator | Tuesday 03 June 2025 15:29:12 +0000 (0:00:00.127) 0:00:58.255 **********
2025-06-03 15:29:12.216412 |
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})  2025-06-03 15:29:12.216495 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})  2025-06-03 15:29:12.216971 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:12.217471 | orchestrator | 2025-06-03 15:29:12.218182 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-03 15:29:12.218443 | orchestrator | Tuesday 03 June 2025 15:29:12 +0000 (0:00:00.155) 0:00:58.411 ********** 2025-06-03 15:29:12.359096 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:12.360147 | orchestrator | 2025-06-03 15:29:12.361159 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-03 15:29:12.362765 | orchestrator | Tuesday 03 June 2025 15:29:12 +0000 (0:00:00.142) 0:00:58.554 ********** 2025-06-03 15:29:12.508989 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})  2025-06-03 15:29:12.509923 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})  2025-06-03 15:29:12.511061 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:12.511979 | orchestrator | 2025-06-03 15:29:12.512758 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-03 15:29:12.513653 | orchestrator | Tuesday 03 June 2025 15:29:12 +0000 (0:00:00.150) 0:00:58.704 ********** 2025-06-03 15:29:12.644337 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:12.645113 | orchestrator | 2025-06-03 15:29:12.645754 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-06-03 15:29:12.646702 | orchestrator | Tuesday 03 June 2025 15:29:12 +0000 (0:00:00.135) 0:00:58.839 ********** 2025-06-03 15:29:12.800955 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})  2025-06-03 15:29:12.801857 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})  2025-06-03 15:29:12.803526 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:12.804443 | orchestrator | 2025-06-03 15:29:12.805409 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-03 15:29:12.806075 | orchestrator | Tuesday 03 June 2025 15:29:12 +0000 (0:00:00.155) 0:00:58.995 ********** 2025-06-03 15:29:12.927850 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:29:12.928130 | orchestrator | 2025-06-03 15:29:12.928806 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-03 15:29:12.929256 | orchestrator | Tuesday 03 June 2025 15:29:12 +0000 (0:00:00.128) 0:00:59.123 ********** 2025-06-03 15:29:13.282772 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})  2025-06-03 15:29:13.283161 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})  2025-06-03 15:29:13.285051 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:13.286452 | orchestrator | 2025-06-03 15:29:13.287275 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-03 15:29:13.288882 | orchestrator | Tuesday 03 June 2025 
15:29:13 +0000 (0:00:00.352) 0:00:59.476 ********** 2025-06-03 15:29:13.447296 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})  2025-06-03 15:29:13.449518 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})  2025-06-03 15:29:13.451187 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:13.451275 | orchestrator | 2025-06-03 15:29:13.451734 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-03 15:29:13.452707 | orchestrator | Tuesday 03 June 2025 15:29:13 +0000 (0:00:00.166) 0:00:59.642 ********** 2025-06-03 15:29:13.594386 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})  2025-06-03 15:29:13.594699 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})  2025-06-03 15:29:13.595579 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:13.596834 | orchestrator | 2025-06-03 15:29:13.598630 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-03 15:29:13.598672 | orchestrator | Tuesday 03 June 2025 15:29:13 +0000 (0:00:00.147) 0:00:59.789 ********** 2025-06-03 15:29:13.719273 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:13.719867 | orchestrator | 2025-06-03 15:29:13.720567 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-03 15:29:13.720951 | orchestrator | Tuesday 03 June 2025 15:29:13 +0000 (0:00:00.125) 0:00:59.915 ********** 2025-06-03 15:29:13.846123 | orchestrator | skipping: [testbed-node-5] 2025-06-03 
15:29:13.846306 | orchestrator | 2025-06-03 15:29:13.847068 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-03 15:29:13.847811 | orchestrator | Tuesday 03 June 2025 15:29:13 +0000 (0:00:00.126) 0:01:00.041 ********** 2025-06-03 15:29:13.987043 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:13.987465 | orchestrator | 2025-06-03 15:29:13.990078 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-03 15:29:13.990876 | orchestrator | Tuesday 03 June 2025 15:29:13 +0000 (0:00:00.138) 0:01:00.179 ********** 2025-06-03 15:29:14.134002 | orchestrator | ok: [testbed-node-5] => { 2025-06-03 15:29:14.134138 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-03 15:29:14.134153 | orchestrator | } 2025-06-03 15:29:14.134629 | orchestrator | 2025-06-03 15:29:14.135338 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-03 15:29:14.135738 | orchestrator | Tuesday 03 June 2025 15:29:14 +0000 (0:00:00.147) 0:01:00.327 ********** 2025-06-03 15:29:14.281416 | orchestrator | ok: [testbed-node-5] => { 2025-06-03 15:29:14.281466 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-03 15:29:14.281471 | orchestrator | } 2025-06-03 15:29:14.282094 | orchestrator | 2025-06-03 15:29:14.282587 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-03 15:29:14.285726 | orchestrator | Tuesday 03 June 2025 15:29:14 +0000 (0:00:00.148) 0:01:00.475 ********** 2025-06-03 15:29:14.425389 | orchestrator | ok: [testbed-node-5] => { 2025-06-03 15:29:14.425869 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-03 15:29:14.426953 | orchestrator | } 2025-06-03 15:29:14.429061 | orchestrator | 2025-06-03 15:29:14.429111 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-03 15:29:14.429608 | 
orchestrator | Tuesday 03 June 2025 15:29:14 +0000 (0:00:00.144) 0:01:00.620 ********** 2025-06-03 15:29:14.916325 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:29:14.916585 | orchestrator | 2025-06-03 15:29:14.916598 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-03 15:29:14.917221 | orchestrator | Tuesday 03 June 2025 15:29:14 +0000 (0:00:00.491) 0:01:01.112 ********** 2025-06-03 15:29:15.406907 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:29:15.407408 | orchestrator | 2025-06-03 15:29:15.409087 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-03 15:29:15.409232 | orchestrator | Tuesday 03 June 2025 15:29:15 +0000 (0:00:00.490) 0:01:01.602 ********** 2025-06-03 15:29:15.894430 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:29:15.894613 | orchestrator | 2025-06-03 15:29:15.894998 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-03 15:29:15.895642 | orchestrator | Tuesday 03 June 2025 15:29:15 +0000 (0:00:00.487) 0:01:02.090 ********** 2025-06-03 15:29:16.160846 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:29:16.161268 | orchestrator | 2025-06-03 15:29:16.162983 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-03 15:29:16.163129 | orchestrator | Tuesday 03 June 2025 15:29:16 +0000 (0:00:00.266) 0:01:02.356 ********** 2025-06-03 15:29:16.258002 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:16.258085 | orchestrator | 2025-06-03 15:29:16.258878 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-03 15:29:16.259705 | orchestrator | Tuesday 03 June 2025 15:29:16 +0000 (0:00:00.096) 0:01:02.453 ********** 2025-06-03 15:29:16.344781 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:16.346095 | orchestrator | 2025-06-03 15:29:16.346840 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-03 15:29:16.347559 | orchestrator | Tuesday 03 June 2025 15:29:16 +0000 (0:00:00.087) 0:01:02.540 ********** 2025-06-03 15:29:16.476858 | orchestrator | ok: [testbed-node-5] => { 2025-06-03 15:29:16.477551 | orchestrator |  "vgs_report": { 2025-06-03 15:29:16.478300 | orchestrator |  "vg": [] 2025-06-03 15:29:16.479376 | orchestrator |  } 2025-06-03 15:29:16.480154 | orchestrator | } 2025-06-03 15:29:16.480943 | orchestrator | 2025-06-03 15:29:16.481741 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-03 15:29:16.482525 | orchestrator | Tuesday 03 June 2025 15:29:16 +0000 (0:00:00.131) 0:01:02.672 ********** 2025-06-03 15:29:16.601497 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:16.601702 | orchestrator | 2025-06-03 15:29:16.602453 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-03 15:29:16.603302 | orchestrator | Tuesday 03 June 2025 15:29:16 +0000 (0:00:00.124) 0:01:02.796 ********** 2025-06-03 15:29:16.724176 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:16.724409 | orchestrator | 2025-06-03 15:29:16.724439 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-03 15:29:16.724886 | orchestrator | Tuesday 03 June 2025 15:29:16 +0000 (0:00:00.123) 0:01:02.920 ********** 2025-06-03 15:29:16.844087 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:16.844221 | orchestrator | 2025-06-03 15:29:16.844954 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-03 15:29:16.845700 | orchestrator | Tuesday 03 June 2025 15:29:16 +0000 (0:00:00.119) 0:01:03.039 ********** 2025-06-03 15:29:16.969648 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:16.969918 | orchestrator | 2025-06-03 15:29:16.969978 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-03 15:29:16.970002 | orchestrator | Tuesday 03 June 2025 15:29:16 +0000 (0:00:00.125) 0:01:03.165 ********** 2025-06-03 15:29:17.086953 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:17.087109 | orchestrator | 2025-06-03 15:29:17.087421 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-03 15:29:17.087859 | orchestrator | Tuesday 03 June 2025 15:29:17 +0000 (0:00:00.117) 0:01:03.282 ********** 2025-06-03 15:29:17.216581 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:17.216833 | orchestrator | 2025-06-03 15:29:17.218727 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-03 15:29:17.219036 | orchestrator | Tuesday 03 June 2025 15:29:17 +0000 (0:00:00.126) 0:01:03.409 ********** 2025-06-03 15:29:17.328616 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:17.329148 | orchestrator | 2025-06-03 15:29:17.329690 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-03 15:29:17.330981 | orchestrator | Tuesday 03 June 2025 15:29:17 +0000 (0:00:00.115) 0:01:03.524 ********** 2025-06-03 15:29:17.443323 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:17.443808 | orchestrator | 2025-06-03 15:29:17.444318 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-03 15:29:17.445113 | orchestrator | Tuesday 03 June 2025 15:29:17 +0000 (0:00:00.114) 0:01:03.638 ********** 2025-06-03 15:29:17.711919 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:29:17.712087 | orchestrator | 2025-06-03 15:29:17.713040 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-03 15:29:17.714646 | orchestrator | Tuesday 03 June 2025 15:29:17 +0000 (0:00:00.268) 0:01:03.907 ********** 
2025-06-03 15:29:17.835419 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:17.835582 | orchestrator |
2025-06-03 15:29:17.835703 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-06-03 15:29:17.837801 | orchestrator | Tuesday 03 June 2025 15:29:17 +0000 (0:00:00.123) 0:01:04.031 **********
2025-06-03 15:29:17.954875 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:17.955132 | orchestrator |
2025-06-03 15:29:17.956012 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-06-03 15:29:17.956805 | orchestrator | Tuesday 03 June 2025 15:29:17 +0000 (0:00:00.117) 0:01:04.149 **********
2025-06-03 15:29:18.074931 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:18.075119 | orchestrator |
2025-06-03 15:29:18.075983 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-06-03 15:29:18.076429 | orchestrator | Tuesday 03 June 2025 15:29:18 +0000 (0:00:00.120) 0:01:04.270 **********
2025-06-03 15:29:18.210088 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:18.210565 | orchestrator |
2025-06-03 15:29:18.211122 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-06-03 15:29:18.211775 | orchestrator | Tuesday 03 June 2025 15:29:18 +0000 (0:00:00.134) 0:01:04.405 **********
2025-06-03 15:29:18.347334 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:18.347669 | orchestrator |
2025-06-03 15:29:18.348075 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-06-03 15:29:18.348539 | orchestrator | Tuesday 03 June 2025 15:29:18 +0000 (0:00:00.138) 0:01:04.543 **********
2025-06-03 15:29:18.485226 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})
2025-06-03 15:29:18.485428 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})
2025-06-03 15:29:18.486255 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:18.486939 | orchestrator |
2025-06-03 15:29:18.487577 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-06-03 15:29:18.488277 | orchestrator | Tuesday 03 June 2025 15:29:18 +0000 (0:00:00.135) 0:01:04.678 **********
2025-06-03 15:29:18.622743 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})
2025-06-03 15:29:18.623534 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})
2025-06-03 15:29:18.623631 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:18.624190 | orchestrator |
2025-06-03 15:29:18.624613 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-06-03 15:29:18.625113 | orchestrator | Tuesday 03 June 2025 15:29:18 +0000 (0:00:00.137) 0:01:04.816 **********
2025-06-03 15:29:18.762270 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})
2025-06-03 15:29:18.762353 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})
2025-06-03 15:29:18.762899 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:18.763827 | orchestrator |
2025-06-03 15:29:18.765420 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-06-03 15:29:18.765475 | orchestrator | Tuesday 03 June 2025 15:29:18 +0000 (0:00:00.140) 0:01:04.957 **********
2025-06-03 15:29:18.909792 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})
2025-06-03 15:29:18.911123 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})
2025-06-03 15:29:18.911150 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:18.911536 | orchestrator |
2025-06-03 15:29:18.912243 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-06-03 15:29:18.912567 | orchestrator | Tuesday 03 June 2025 15:29:18 +0000 (0:00:00.148) 0:01:05.105 **********
2025-06-03 15:29:19.045438 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})
2025-06-03 15:29:19.046979 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})
2025-06-03 15:29:19.047024 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:19.047341 | orchestrator |
2025-06-03 15:29:19.048036 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-06-03 15:29:19.048550 | orchestrator | Tuesday 03 June 2025 15:29:19 +0000 (0:00:00.134) 0:01:05.240 **********
2025-06-03 15:29:19.173273 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})
2025-06-03 15:29:19.173778 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})
2025-06-03 15:29:19.174623 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:19.175210 | orchestrator |
2025-06-03 15:29:19.175787 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-03 15:29:19.176311 | orchestrator | Tuesday 03 June 2025 15:29:19 +0000 (0:00:00.127) 0:01:05.368 **********
2025-06-03 15:29:19.455649 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})
2025-06-03 15:29:19.455782 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})
2025-06-03 15:29:19.456274 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:19.457417 | orchestrator |
2025-06-03 15:29:19.459176 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-03 15:29:19.459669 | orchestrator | Tuesday 03 June 2025 15:29:19 +0000 (0:00:00.282) 0:01:05.650 **********
2025-06-03 15:29:19.615301 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})
2025-06-03 15:29:19.616861 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})
2025-06-03 15:29:19.617779 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:19.618697 | orchestrator |
2025-06-03 15:29:19.619608 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-03 15:29:19.620200 | orchestrator | Tuesday 03 June 2025 15:29:19 +0000 (0:00:00.158) 0:01:05.809 **********
2025-06-03 15:29:20.139155 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:29:20.140282 | orchestrator |
2025-06-03 15:29:20.140726 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-03 15:29:20.141162 | orchestrator | Tuesday 03 June 2025 15:29:20 +0000 (0:00:00.523) 0:01:06.333 **********
2025-06-03 15:29:20.666215 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:29:20.666928 | orchestrator |
2025-06-03 15:29:20.668092 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-03 15:29:20.668686 | orchestrator | Tuesday 03 June 2025 15:29:20 +0000 (0:00:00.527) 0:01:06.861 **********
2025-06-03 15:29:20.815056 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:29:20.815555 | orchestrator |
2025-06-03 15:29:20.816217 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-03 15:29:20.817307 | orchestrator | Tuesday 03 June 2025 15:29:20 +0000 (0:00:00.149) 0:01:07.010 **********
2025-06-03 15:29:20.962330 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'vg_name': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})
2025-06-03 15:29:20.962500 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'vg_name': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})
2025-06-03 15:29:20.963270 | orchestrator |
2025-06-03 15:29:20.964226 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-03 15:29:20.965118 | orchestrator | Tuesday 03 June 2025 15:29:20 +0000 (0:00:00.147) 0:01:07.157 **********
2025-06-03 15:29:21.102278 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})
2025-06-03 15:29:21.102946 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})
2025-06-03 15:29:21.103882 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:21.104979 | orchestrator |
2025-06-03 15:29:21.105833 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-03 15:29:21.106897 | orchestrator | Tuesday 03 June 2025 15:29:21 +0000 (0:00:00.140) 0:01:07.298 **********
2025-06-03 15:29:21.232457 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})
2025-06-03 15:29:21.232735 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})
2025-06-03 15:29:21.233236 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:21.234003 | orchestrator |
2025-06-03 15:29:21.234730 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-03 15:29:21.235019 | orchestrator | Tuesday 03 June 2025 15:29:21 +0000 (0:00:00.130) 0:01:07.428 **********
2025-06-03 15:29:21.375776 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'})
2025-06-03 15:29:21.376580 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'})
2025-06-03 15:29:21.377162 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:21.377913 | orchestrator |
2025-06-03 15:29:21.378327 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-03 15:29:21.378875 | orchestrator | Tuesday 03 June 2025 15:29:21 +0000 (0:00:00.142) 0:01:07.571 **********
2025-06-03 15:29:21.499883 | orchestrator | ok: [testbed-node-5] => {
2025-06-03 15:29:21.500496 | orchestrator |     "lvm_report": {
2025-06-03 15:29:21.501257 | orchestrator |         "lv": [
2025-06-03 15:29:21.502194 | orchestrator |             {
2025-06-03 15:29:21.502857 | orchestrator |                 "lv_name": "osd-block-610c71bb-335d-5813-8d53-12327c30775e",
2025-06-03 15:29:21.503466 | orchestrator |                 "vg_name": "ceph-610c71bb-335d-5813-8d53-12327c30775e"
2025-06-03 15:29:21.504246 | orchestrator |             },
2025-06-03 15:29:21.505113 | orchestrator |             {
2025-06-03 15:29:21.505744 | orchestrator |                 "lv_name": "osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94",
2025-06-03 15:29:21.506490 | orchestrator |                 "vg_name": "ceph-ae8860ce-b651-5449-9c0b-e6c018225b94"
2025-06-03 15:29:21.507673 | orchestrator |             }
2025-06-03 15:29:21.508594 | orchestrator |         ],
2025-06-03 15:29:21.509731 | orchestrator |         "pv": [
2025-06-03 15:29:21.510724 | orchestrator |             {
2025-06-03 15:29:21.510897 | orchestrator |                 "pv_name": "/dev/sdb",
2025-06-03 15:29:21.511328 | orchestrator |                 "vg_name": "ceph-610c71bb-335d-5813-8d53-12327c30775e"
2025-06-03 15:29:21.512060 | orchestrator |             },
2025-06-03 15:29:21.512773 | orchestrator |             {
2025-06-03 15:29:21.513485 | orchestrator |                 "pv_name": "/dev/sdc",
2025-06-03 15:29:21.514380 | orchestrator |                 "vg_name": "ceph-ae8860ce-b651-5449-9c0b-e6c018225b94"
2025-06-03 15:29:21.514813 | orchestrator |             }
2025-06-03 15:29:21.515309 | orchestrator |         ]
2025-06-03 15:29:21.515814 | orchestrator |     }
2025-06-03 15:29:21.516221 | orchestrator | }
2025-06-03 15:29:21.516812 | orchestrator |
2025-06-03 15:29:21.517080 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:29:21.517304 | orchestrator | 2025-06-03 15:29:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-03 15:29:21.517394 | orchestrator | 2025-06-03 15:29:21 | INFO  | Please wait and do not abort execution.
2025-06-03 15:29:21.517904 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-03 15:29:21.518173 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-03 15:29:21.518757 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-03 15:29:21.519320 | orchestrator |
2025-06-03 15:29:21.519682 | orchestrator |
2025-06-03 15:29:21.519811 | orchestrator |
2025-06-03 15:29:21.520158 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:29:21.520685 | orchestrator | Tuesday 03 June 2025 15:29:21 +0000 (0:00:00.124) 0:01:07.695 **********
2025-06-03 15:29:21.520717 | orchestrator | ===============================================================================
2025-06-03 15:29:21.521252 | orchestrator | Create block VGs -------------------------------------------------------- 5.72s
2025-06-03 15:29:21.521437 | orchestrator | Create block LVs -------------------------------------------------------- 4.03s
2025-06-03 15:29:21.521746 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.78s
2025-06-03 15:29:21.522011 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.56s
2025-06-03 15:29:21.522432 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.55s
2025-06-03 15:29:21.522667 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.52s
2025-06-03 15:29:21.522891 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.49s
2025-06-03 15:29:21.523146 | orchestrator | Add known partitions to the list of available block devices ------------- 1.36s
2025-06-03 15:29:21.523386 | orchestrator | Add known links to the list of available block devices ------------------ 1.16s
2025-06-03 15:29:21.523678 | orchestrator | Print LVM report data --------------------------------------------------- 0.89s
2025-06-03 15:29:21.523997 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s
2025-06-03 15:29:21.524187 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s
2025-06-03 15:29:21.524568 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.79s
2025-06-03 15:29:21.524822 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s
2025-06-03 15:29:21.525105 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2025-06-03 15:29:21.525309 | orchestrator | Get initial list of available block devices ----------------------------- 0.68s
2025-06-03 15:29:21.525673 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2025-06-03 15:29:21.526146 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.67s
2025-06-03 15:29:21.526287 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-06-03 15:29:21.526344 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2025-06-03 15:29:23.416220 | orchestrator | Registering Redlock._acquired_script
2025-06-03 15:29:23.416300 | orchestrator | Registering Redlock._extend_script
2025-06-03 15:29:23.416310 | orchestrator | Registering Redlock._release_script
2025-06-03 15:29:23.489304 | orchestrator | 2025-06-03 15:29:23 | INFO  | Task b2d05efe-2eb3-46c6-8395-f60af4bc48e4 (facts) was prepared for execution.
2025-06-03 15:29:23.489392 | orchestrator | 2025-06-03 15:29:23 | INFO  | It takes a moment until task b2d05efe-2eb3-46c6-8395-f60af4bc48e4 (facts) has been started and output is visible here.
2025-06-03 15:29:27.193632 | orchestrator |
2025-06-03 15:29:27.193751 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-03 15:29:27.193768 | orchestrator |
2025-06-03 15:29:27.193780 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-03 15:29:27.193791 | orchestrator | Tuesday 03 June 2025 15:29:27 +0000 (0:00:00.243) 0:00:00.243 **********
2025-06-03 15:29:28.141680 | orchestrator | ok: [testbed-manager]
2025-06-03 15:29:28.144235 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:29:28.144270 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:29:28.144282 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:29:28.144291 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:29:28.144781 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:29:28.146141 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:29:28.146218 | orchestrator |
2025-06-03 15:29:28.146502 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-03 15:29:28.147108 | orchestrator | Tuesday 03 June 2025 15:29:28 +0000 (0:00:00.949) 0:00:01.193 **********
2025-06-03 15:29:28.288079 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:29:28.371610 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:29:28.442648 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:29:28.512866 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:29:28.581774 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:29:29.269846 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:29:29.269951 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:29.270474 | orchestrator |
2025-06-03 15:29:29.271342 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-03 15:29:29.271977 | orchestrator |
2025-06-03 15:29:29.272812 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-03 15:29:29.273404 | orchestrator | Tuesday 03 June 2025 15:29:29 +0000 (0:00:01.126) 0:00:02.320 **********
2025-06-03 15:29:33.957066 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:29:33.957281 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:29:33.958297 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:29:33.959076 | orchestrator | ok: [testbed-manager]
2025-06-03 15:29:33.959743 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:29:33.960480 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:29:33.961034 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:29:33.961918 | orchestrator |
2025-06-03 15:29:33.962865 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-03 15:29:33.963588 | orchestrator |
2025-06-03 15:29:33.964207 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-03 15:29:33.964597 | orchestrator | Tuesday 03 June 2025 15:29:33 +0000 (0:00:04.692) 0:00:07.012 **********
2025-06-03 15:29:34.107586 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:29:34.175839 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:29:34.244331 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:29:34.313894 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:29:34.388335 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:29:34.415052 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:29:34.415169 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:29:34.415853 | orchestrator |
2025-06-03 15:29:34.416689 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:29:34.416926 | orchestrator | 2025-06-03 15:29:34 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-03 15:29:34.417146 | orchestrator | 2025-06-03 15:29:34 | INFO  | Please wait and do not abort execution. 2025-06-03 15:29:34.417691 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:29:34.418430 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:29:34.419451 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:29:34.419912 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:29:34.421303 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:29:34.422227 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:29:34.423159 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:29:34.423801 | orchestrator | 2025-06-03 15:29:34.424323 | orchestrator | 2025-06-03 15:29:34.425171 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:29:34.425898 | orchestrator | Tuesday 03 June 2025 15:29:34 +0000 (0:00:00.458) 0:00:07.470 ********** 2025-06-03 15:29:34.426345 | orchestrator | =============================================================================== 2025-06-03 15:29:34.427063 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.69s 2025-06-03 15:29:34.427802 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.13s 2025-06-03 15:29:34.428307 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.95s 2025-06-03 15:29:34.428931 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s 2025-06-03 
15:29:34.873327 | orchestrator | 2025-06-03 15:29:34.874406 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue Jun 3 15:29:34 UTC 2025 2025-06-03 15:29:34.874450 | orchestrator | 2025-06-03 15:29:36.344921 | orchestrator | 2025-06-03 15:29:36 | INFO  | Collection nutshell is prepared for execution 2025-06-03 15:29:36.345018 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [0] - dotfiles 2025-06-03 15:29:36.349244 | orchestrator | Registering Redlock._acquired_script 2025-06-03 15:29:36.349325 | orchestrator | Registering Redlock._extend_script 2025-06-03 15:29:36.349336 | orchestrator | Registering Redlock._release_script 2025-06-03 15:29:36.352870 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [0] - homer 2025-06-03 15:29:36.352920 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [0] - netdata 2025-06-03 15:29:36.352934 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [0] - openstackclient 2025-06-03 15:29:36.353036 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [0] - phpmyadmin 2025-06-03 15:29:36.353052 | orchestrator | 2025-06-03 15:29:36 | INFO  | A [0] - common 2025-06-03 15:29:36.355579 | orchestrator | 2025-06-03 15:29:36 | INFO  | A [1] -- loadbalancer 2025-06-03 15:29:36.355633 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [2] --- opensearch 2025-06-03 15:29:36.355639 | orchestrator | 2025-06-03 15:29:36 | INFO  | A [2] --- mariadb-ng 2025-06-03 15:29:36.355644 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [3] ---- horizon 2025-06-03 15:29:36.355649 | orchestrator | 2025-06-03 15:29:36 | INFO  | A [3] ---- keystone 2025-06-03 15:29:36.355653 | orchestrator | 2025-06-03 15:29:36 | INFO  | A [4] ----- neutron 2025-06-03 15:29:36.355658 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [5] ------ wait-for-nova 2025-06-03 15:29:36.355663 | orchestrator | 2025-06-03 15:29:36 | INFO  | A [5] ------ octavia 2025-06-03 15:29:36.355667 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [4] ----- barbican 2025-06-03 15:29:36.355670 | orchestrator | 
2025-06-03 15:29:36 | INFO  | D [4] ----- designate 2025-06-03 15:29:36.355689 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [4] ----- ironic 2025-06-03 15:29:36.355695 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [4] ----- placement 2025-06-03 15:29:36.355702 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [4] ----- magnum 2025-06-03 15:29:36.356287 | orchestrator | 2025-06-03 15:29:36 | INFO  | A [1] -- openvswitch 2025-06-03 15:29:36.356298 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [2] --- ovn 2025-06-03 15:29:36.356302 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [1] -- memcached 2025-06-03 15:29:36.356307 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [1] -- redis 2025-06-03 15:29:36.356311 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [1] -- rabbitmq-ng 2025-06-03 15:29:36.356316 | orchestrator | 2025-06-03 15:29:36 | INFO  | A [0] - kubernetes 2025-06-03 15:29:36.357633 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [1] -- kubeconfig 2025-06-03 15:29:36.358057 | orchestrator | 2025-06-03 15:29:36 | INFO  | A [1] -- copy-kubeconfig 2025-06-03 15:29:36.358072 | orchestrator | 2025-06-03 15:29:36 | INFO  | A [0] - ceph 2025-06-03 15:29:36.359099 | orchestrator | 2025-06-03 15:29:36 | INFO  | A [1] -- ceph-pools 2025-06-03 15:29:36.359304 | orchestrator | 2025-06-03 15:29:36 | INFO  | A [2] --- copy-ceph-keys 2025-06-03 15:29:36.359312 | orchestrator | 2025-06-03 15:29:36 | INFO  | A [3] ---- cephclient 2025-06-03 15:29:36.359389 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-06-03 15:29:36.359792 | orchestrator | 2025-06-03 15:29:36 | INFO  | A [4] ----- wait-for-keystone 2025-06-03 15:29:36.359813 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [5] ------ kolla-ceph-rgw 2025-06-03 15:29:36.359818 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [5] ------ glance 2025-06-03 15:29:36.360022 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [5] ------ cinder 2025-06-03 15:29:36.360029 | 
orchestrator | 2025-06-03 15:29:36 | INFO  | D [5] ------ nova 2025-06-03 15:29:36.360105 | orchestrator | 2025-06-03 15:29:36 | INFO  | A [4] ----- prometheus 2025-06-03 15:29:36.360287 | orchestrator | 2025-06-03 15:29:36 | INFO  | D [5] ------ grafana 2025-06-03 15:29:36.533206 | orchestrator | 2025-06-03 15:29:36 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-06-03 15:29:36.533303 | orchestrator | 2025-06-03 15:29:36 | INFO  | Tasks are running in the background 2025-06-03 15:29:38.844551 | orchestrator | 2025-06-03 15:29:38 | INFO  | No task IDs specified, wait for all currently running tasks 2025-06-03 15:29:40.942249 | orchestrator | 2025-06-03 15:29:40 | INFO  | Task f8c1ecb0-7dd9-4c19-be17-8515f7a8dd27 is in state STARTED 2025-06-03 15:29:40.942329 | orchestrator | 2025-06-03 15:29:40 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:29:40.942336 | orchestrator | 2025-06-03 15:29:40 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:29:40.945576 | orchestrator | 2025-06-03 15:29:40 | INFO  | Task 84823d4b-63e9-45e3-82c9-3927f6bd9022 is in state STARTED 2025-06-03 15:29:40.952548 | orchestrator | 2025-06-03 15:29:40 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state STARTED 2025-06-03 15:29:40.956145 | orchestrator | 2025-06-03 15:29:40 | INFO  | Task 36406ee6-44c4-413b-89af-a8fd35dcb6f9 is in state STARTED 2025-06-03 15:29:40.956179 | orchestrator | 2025-06-03 15:29:40 | INFO  | Task 277b44e5-5d14-42fe-990d-1929b436e41f is in state STARTED 2025-06-03 15:29:40.956191 | orchestrator | 2025-06-03 15:29:40 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:43.996021 | orchestrator | 2025-06-03 15:29:43 | INFO  | Task f8c1ecb0-7dd9-4c19-be17-8515f7a8dd27 is in state STARTED 2025-06-03 15:29:43.996127 | orchestrator | 2025-06-03 15:29:43 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:29:43.998277 
| orchestrator | 2025-06-03 15:29:43 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:29:43.998628 | orchestrator | 2025-06-03 15:29:43 | INFO  | Task 84823d4b-63e9-45e3-82c9-3927f6bd9022 is in state STARTED 2025-06-03 15:29:44.001369 | orchestrator | 2025-06-03 15:29:43 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state STARTED 2025-06-03 15:29:44.001780 | orchestrator | 2025-06-03 15:29:43 | INFO  | Task 36406ee6-44c4-413b-89af-a8fd35dcb6f9 is in state STARTED 2025-06-03 15:29:44.005480 | orchestrator | 2025-06-03 15:29:44 | INFO  | Task 277b44e5-5d14-42fe-990d-1929b436e41f is in state STARTED 2025-06-03 15:29:44.005518 | orchestrator | 2025-06-03 15:29:44 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:47.033820 | orchestrator | 2025-06-03 15:29:47 | INFO  | Task f8c1ecb0-7dd9-4c19-be17-8515f7a8dd27 is in state STARTED 2025-06-03 15:29:47.034962 | orchestrator | 2025-06-03 15:29:47 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:29:47.035438 | orchestrator | 2025-06-03 15:29:47 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:29:47.036047 | orchestrator | 2025-06-03 15:29:47 | INFO  | Task 84823d4b-63e9-45e3-82c9-3927f6bd9022 is in state STARTED 2025-06-03 15:29:47.036657 | orchestrator | 2025-06-03 15:29:47 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state STARTED 2025-06-03 15:29:47.037027 | orchestrator | 2025-06-03 15:29:47 | INFO  | Task 36406ee6-44c4-413b-89af-a8fd35dcb6f9 is in state STARTED 2025-06-03 15:29:47.038280 | orchestrator | 2025-06-03 15:29:47 | INFO  | Task 277b44e5-5d14-42fe-990d-1929b436e41f is in state STARTED 2025-06-03 15:29:47.038331 | orchestrator | 2025-06-03 15:29:47 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:50.083229 | orchestrator | 2025-06-03 15:29:50 | INFO  | Task f8c1ecb0-7dd9-4c19-be17-8515f7a8dd27 is in state STARTED 2025-06-03 15:29:50.083351 | 
orchestrator | 2025-06-03 15:29:50 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:29:50.083483 | orchestrator | 2025-06-03 15:29:50 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:29:50.084693 | orchestrator | 2025-06-03 15:29:50 | INFO  | Task 84823d4b-63e9-45e3-82c9-3927f6bd9022 is in state STARTED 2025-06-03 15:29:50.086193 | orchestrator | 2025-06-03 15:29:50 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state STARTED 2025-06-03 15:29:50.088061 | orchestrator | 2025-06-03 15:29:50 | INFO  | Task 36406ee6-44c4-413b-89af-a8fd35dcb6f9 is in state STARTED 2025-06-03 15:29:50.090161 | orchestrator | 2025-06-03 15:29:50 | INFO  | Task 277b44e5-5d14-42fe-990d-1929b436e41f is in state STARTED 2025-06-03 15:29:50.090208 | orchestrator | 2025-06-03 15:29:50 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:53.127692 | orchestrator | 2025-06-03 15:29:53 | INFO  | Task f8c1ecb0-7dd9-4c19-be17-8515f7a8dd27 is in state STARTED 2025-06-03 15:29:53.129782 | orchestrator | 2025-06-03 15:29:53 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:29:53.129828 | orchestrator | 2025-06-03 15:29:53 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:29:53.129840 | orchestrator | 2025-06-03 15:29:53 | INFO  | Task 84823d4b-63e9-45e3-82c9-3927f6bd9022 is in state STARTED 2025-06-03 15:29:53.130467 | orchestrator | 2025-06-03 15:29:53 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state STARTED 2025-06-03 15:29:53.132133 | orchestrator | 2025-06-03 15:29:53 | INFO  | Task 36406ee6-44c4-413b-89af-a8fd35dcb6f9 is in state STARTED 2025-06-03 15:29:53.135762 | orchestrator | 2025-06-03 15:29:53 | INFO  | Task 277b44e5-5d14-42fe-990d-1929b436e41f is in state STARTED 2025-06-03 15:29:53.135870 | orchestrator | 2025-06-03 15:29:53 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:56.188410 | 
orchestrator | 2025-06-03 15:29:56 | INFO  | Task f8c1ecb0-7dd9-4c19-be17-8515f7a8dd27 is in state STARTED 2025-06-03 15:29:56.196618 | orchestrator | 2025-06-03 15:29:56 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:29:56.196671 | orchestrator | 2025-06-03 15:29:56 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:29:56.196677 | orchestrator | 2025-06-03 15:29:56 | INFO  | Task 84823d4b-63e9-45e3-82c9-3927f6bd9022 is in state STARTED 2025-06-03 15:29:56.196682 | orchestrator | 2025-06-03 15:29:56 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state STARTED 2025-06-03 15:29:56.197850 | orchestrator | 2025-06-03 15:29:56 | INFO  | Task 36406ee6-44c4-413b-89af-a8fd35dcb6f9 is in state STARTED 2025-06-03 15:29:56.200864 | orchestrator | 2025-06-03 15:29:56 | INFO  | Task 277b44e5-5d14-42fe-990d-1929b436e41f is in state STARTED 2025-06-03 15:29:56.200895 | orchestrator | 2025-06-03 15:29:56 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:29:59.257325 | orchestrator | 2025-06-03 15:29:59 | INFO  | Task f8c1ecb0-7dd9-4c19-be17-8515f7a8dd27 is in state STARTED 2025-06-03 15:29:59.260435 | orchestrator | 2025-06-03 15:29:59 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:29:59.264261 | orchestrator | 2025-06-03 15:29:59 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:29:59.264287 | orchestrator | 2025-06-03 15:29:59 | INFO  | Task 84823d4b-63e9-45e3-82c9-3927f6bd9022 is in state STARTED 2025-06-03 15:29:59.264299 | orchestrator | 2025-06-03 15:29:59 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state STARTED 2025-06-03 15:29:59.264310 | orchestrator | 2025-06-03 15:29:59 | INFO  | Task 36406ee6-44c4-413b-89af-a8fd35dcb6f9 is in state STARTED 2025-06-03 15:29:59.264322 | orchestrator | 2025-06-03 15:29:59 | INFO  | Task 277b44e5-5d14-42fe-990d-1929b436e41f is in state STARTED 2025-06-03 
15:29:59.264333 | orchestrator | 2025-06-03 15:29:59 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:02.344185 | orchestrator | 2025-06-03 15:30:02 | INFO  | Task f8c1ecb0-7dd9-4c19-be17-8515f7a8dd27 is in state STARTED 2025-06-03 15:30:02.344259 | orchestrator | 2025-06-03 15:30:02 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:30:02.350105 | orchestrator | 2025-06-03 15:30:02 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:30:02.350149 | orchestrator | 2025-06-03 15:30:02 | INFO  | Task 84823d4b-63e9-45e3-82c9-3927f6bd9022 is in state STARTED 2025-06-03 15:30:02.359937 | orchestrator | 2025-06-03 15:30:02 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state STARTED 2025-06-03 15:30:02.359977 | orchestrator | 2025-06-03 15:30:02 | INFO  | Task 36406ee6-44c4-413b-89af-a8fd35dcb6f9 is in state STARTED 2025-06-03 15:30:02.363745 | orchestrator | 2025-06-03 15:30:02 | INFO  | Task 277b44e5-5d14-42fe-990d-1929b436e41f is in state STARTED 2025-06-03 15:30:02.363767 | orchestrator | 2025-06-03 15:30:02 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:05.443893 | orchestrator | 2025-06-03 15:30:05 | INFO  | Task f8c1ecb0-7dd9-4c19-be17-8515f7a8dd27 is in state STARTED 2025-06-03 15:30:05.447445 | orchestrator | 2025-06-03 15:30:05 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:30:05.448677 | orchestrator | 2025-06-03 15:30:05 | INFO  | Task bddd135d-8498-40f3-8a22-9627c5919e72 is in state STARTED 2025-06-03 15:30:05.449136 | orchestrator | 2025-06-03 15:30:05 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:30:05.449930 | orchestrator | 2025-06-03 15:30:05 | INFO  | Task 84823d4b-63e9-45e3-82c9-3927f6bd9022 is in state STARTED 2025-06-03 15:30:05.451100 | orchestrator | 2025-06-03 15:30:05 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state STARTED 2025-06-03 
15:30:05.452229 | orchestrator | 2025-06-03 15:30:05 | INFO  | Task 36406ee6-44c4-413b-89af-a8fd35dcb6f9 is in state SUCCESS 2025-06-03 15:30:05.455080 | orchestrator | 2025-06-03 15:30:05.455166 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-06-03 15:30:05.455181 | orchestrator | 2025-06-03 15:30:05.455194 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-06-03 15:30:05.455204 | orchestrator | Tuesday 03 June 2025 15:29:47 +0000 (0:00:00.598) 0:00:00.598 ********** 2025-06-03 15:30:05.455216 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:05.455228 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:30:05.455239 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:30:05.455250 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:30:05.455260 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:30:05.455271 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:30:05.455282 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:30:05.455293 | orchestrator | 2025-06-03 15:30:05.455304 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-06-03 15:30:05.455315 | orchestrator | Tuesday 03 June 2025 15:29:52 +0000 (0:00:04.657) 0:00:05.256 ********** 2025-06-03 15:30:05.455327 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-03 15:30:05.455339 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-03 15:30:05.455350 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-03 15:30:05.455360 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-03 15:30:05.455371 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-03 15:30:05.455382 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-03 15:30:05.455393 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-03 15:30:05.455403 | orchestrator | 2025-06-03 15:30:05.455423 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-06-03 15:30:05.455456 | orchestrator | Tuesday 03 June 2025 15:29:54 +0000 (0:00:02.037) 0:00:07.293 ********** 2025-06-03 15:30:05.455472 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-03 15:29:53.226600', 'end': '2025-06-03 15:29:53.235291', 'delta': '0:00:00.008691', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-03 15:30:05.455487 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-03 15:29:53.111611', 'end': '2025-06-03 15:29:53.122536', 'delta': '0:00:00.010925', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-03 15:30:05.455500 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-03 15:29:52.967048', 'end': '2025-06-03 15:29:52.970691', 'delta': '0:00:00.003643', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-03 15:30:05.455571 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-03 15:29:53.453926', 'end': '2025-06-03 15:29:53.462075', 'delta': '0:00:00.008149', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-03 15:30:05.455601 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-03 15:29:53.614868', 'end': '2025-06-03 15:29:53.624051', 'delta': '0:00:00.009183', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-03 15:30:05.455639 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-03 15:29:53.710915', 'end': '2025-06-03 15:29:53.720862', 'delta': '0:00:00.009947', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-03 15:30:05.455662 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-03 15:29:53.878764', 'end': '2025-06-03 15:29:53.887716', 'delta': '0:00:00.008952', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-03 15:30:05.455682 | orchestrator | 2025-06-03 15:30:05.455701 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2025-06-03 15:30:05.455712 | orchestrator | Tuesday 03 June 2025 15:29:56 +0000 (0:00:02.491) 0:00:09.785 ********** 2025-06-03 15:30:05.455723 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-03 15:30:05.455734 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-03 15:30:05.455745 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-03 15:30:05.455757 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-03 15:30:05.455775 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-03 15:30:05.455798 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-03 15:30:05.455823 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-03 15:30:05.455840 | orchestrator | 2025-06-03 15:30:05.455858 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-06-03 15:30:05.455878 | orchestrator | Tuesday 03 June 2025 15:29:58 +0000 (0:00:02.199) 0:00:11.984 ********** 2025-06-03 15:30:05.455896 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-06-03 15:30:05.455914 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-06-03 15:30:05.455925 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-06-03 15:30:05.455935 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-06-03 15:30:05.455946 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-06-03 15:30:05.455957 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-06-03 15:30:05.455968 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-06-03 15:30:05.455979 | orchestrator | 2025-06-03 15:30:05.455990 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:30:05.456012 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:30:05.456025 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:30:05.456047 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:30:05.456058 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:30:05.456069 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:30:05.456080 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:30:05.456091 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:30:05.456101 | orchestrator | 2025-06-03 15:30:05.456112 | orchestrator | 2025-06-03 15:30:05.456124 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:30:05.456135 | orchestrator | Tuesday 03 June 2025 15:30:03 +0000 (0:00:04.156) 0:00:16.141 ********** 2025-06-03 15:30:05.456146 | orchestrator | =============================================================================== 2025-06-03 15:30:05.456157 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.66s 2025-06-03 15:30:05.456168 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.16s 2025-06-03 15:30:05.456179 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.49s 2025-06-03 15:30:05.456190 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.20s 2025-06-03 15:30:05.456201 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 2.04s 2025-06-03 15:30:05.456307 | orchestrator | 2025-06-03 15:30:05 | INFO  | Task 277b44e5-5d14-42fe-990d-1929b436e41f is in state STARTED 2025-06-03 15:30:05.456323 | orchestrator | 2025-06-03 15:30:05 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:30:08.488309 | orchestrator | 2025-06-03 15:30:08 | INFO  | Task f8c1ecb0-7dd9-4c19-be17-8515f7a8dd27 is in state STARTED
2025-06-03 15:30:08.488859 | orchestrator | 2025-06-03 15:30:08 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:30:08.489634 | orchestrator | 2025-06-03 15:30:08 | INFO  | Task bddd135d-8498-40f3-8a22-9627c5919e72 is in state STARTED
2025-06-03 15:30:08.489679 | orchestrator | 2025-06-03 15:30:08 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:30:08.490109 | orchestrator | 2025-06-03 15:30:08 | INFO  | Task 84823d4b-63e9-45e3-82c9-3927f6bd9022 is in state STARTED
2025-06-03 15:30:08.490359 | orchestrator | 2025-06-03 15:30:08 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state STARTED
2025-06-03 15:30:08.491026 | orchestrator | 2025-06-03 15:30:08 | INFO  | Task 277b44e5-5d14-42fe-990d-1929b436e41f is in state STARTED
2025-06-03 15:30:08.491205 | orchestrator | 2025-06-03 15:30:08 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:30:23.859599 | orchestrator | 2025-06-03 15:30:23 | INFO  | Task 84823d4b-63e9-45e3-82c9-3927f6bd9022 is in state SUCCESS
2025-06-03 15:30:36.075253 | orchestrator | 2025-06-03 15:30:36 | INFO  | Task f8c1ecb0-7dd9-4c19-be17-8515f7a8dd27 is in state SUCCESS
2025-06-03 15:30:45.229892 |
orchestrator | 2025-06-03 15:30:45 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:30:45.230187 | orchestrator | 2025-06-03 15:30:45 | INFO  | Task bddd135d-8498-40f3-8a22-9627c5919e72 is in state STARTED 2025-06-03 15:30:45.230976 | orchestrator | 2025-06-03 15:30:45 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:30:45.231602 | orchestrator | 2025-06-03 15:30:45 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state STARTED 2025-06-03 15:30:45.233817 | orchestrator | 2025-06-03 15:30:45 | INFO  | Task 277b44e5-5d14-42fe-990d-1929b436e41f is in state STARTED 2025-06-03 15:30:45.233842 | orchestrator | 2025-06-03 15:30:45 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:30:48.272102 | orchestrator | 2025-06-03 15:30:48 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:30:48.272444 | orchestrator | 2025-06-03 15:30:48 | INFO  | Task bddd135d-8498-40f3-8a22-9627c5919e72 is in state STARTED 2025-06-03 15:30:48.279790 | orchestrator | 2025-06-03 15:30:48.279841 | orchestrator | 2025-06-03 15:30:48.279851 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-06-03 15:30:48.279880 | orchestrator | 2025-06-03 15:30:48.279889 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-06-03 15:30:48.279898 | orchestrator | Tuesday 03 June 2025 15:29:47 +0000 (0:00:00.584) 0:00:00.584 ********** 2025-06-03 15:30:48.279906 | orchestrator | ok: [testbed-manager] => { 2025-06-03 15:30:48.279915 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-06-03 15:30:48.279925 | orchestrator | } 2025-06-03 15:30:48.279933 | orchestrator | 2025-06-03 15:30:48.279941 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-06-03 15:30:48.279949 | orchestrator | Tuesday 03 June 2025 15:29:48 +0000 (0:00:00.514) 0:00:01.099 ********** 2025-06-03 15:30:48.279957 | orchestrator | ok: [testbed-manager] 2025-06-03 15:30:48.279965 | orchestrator | 2025-06-03 15:30:48.279974 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-06-03 15:30:48.279982 | orchestrator | Tuesday 03 June 2025 15:29:50 +0000 (0:00:01.908) 0:00:03.008 ********** 2025-06-03 15:30:48.279990 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-06-03 15:30:48.279998 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-06-03 15:30:48.280006 | orchestrator | 2025-06-03 15:30:48.280014 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-06-03 15:30:48.280022 | orchestrator | Tuesday 03 June 2025 15:29:51 +0000 (0:00:01.311) 0:00:04.320 ********** 2025-06-03 15:30:48.280029 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:48.280037 | orchestrator | 2025-06-03 15:30:48.280045 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-06-03 15:30:48.280053 | orchestrator | Tuesday 03 June 2025 15:29:53 +0000 (0:00:02.321) 0:00:06.641 ********** 2025-06-03 15:30:48.280060 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:48.280068 | orchestrator | 2025-06-03 15:30:48.280076 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-06-03 15:30:48.280084 | orchestrator | Tuesday 03 June 2025 15:29:55 +0000 (0:00:02.038) 0:00:08.679 ********** 2025-06-03 15:30:48.280092 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-06-03 15:30:48.280099 | orchestrator | ok: [testbed-manager] 2025-06-03 15:30:48.280107 | orchestrator | 2025-06-03 15:30:48.280115 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-06-03 15:30:48.280123 | orchestrator | Tuesday 03 June 2025 15:30:19 +0000 (0:00:23.949) 0:00:32.629 ********** 2025-06-03 15:30:48.280130 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:48.280138 | orchestrator | 2025-06-03 15:30:48.280146 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:30:48.280154 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:30:48.280164 | orchestrator | 2025-06-03 15:30:48.280172 | orchestrator | 2025-06-03 15:30:48.280179 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:30:48.280187 | orchestrator | Tuesday 03 June 2025 15:30:21 +0000 (0:00:01.898) 0:00:34.528 ********** 2025-06-03 15:30:48.280195 | orchestrator | =============================================================================== 2025-06-03 15:30:48.280203 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 23.95s 2025-06-03 15:30:48.280211 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.32s 2025-06-03 15:30:48.280218 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.04s 2025-06-03 15:30:48.280226 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.91s 2025-06-03 15:30:48.280234 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.90s 2025-06-03 15:30:48.280241 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.31s 2025-06-03 15:30:48.280249 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.51s 2025-06-03 15:30:48.280263 | orchestrator | 2025-06-03 15:30:48.280271 | orchestrator | 2025-06-03 15:30:48.280278 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-06-03 15:30:48.280286 | orchestrator | 2025-06-03 15:30:48.280294 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-06-03 15:30:48.280301 | orchestrator | Tuesday 03 June 2025 15:29:47 +0000 (0:00:00.509) 0:00:00.509 ********** 2025-06-03 15:30:48.280309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-06-03 15:30:48.280318 | orchestrator | 2025-06-03 15:30:48.280326 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-06-03 15:30:48.280334 | orchestrator | Tuesday 03 June 2025 15:29:48 +0000 (0:00:01.086) 0:00:01.595 ********** 2025-06-03 15:30:48.280342 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-06-03 15:30:48.280350 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-06-03 15:30:48.280365 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-06-03 15:30:48.280374 | orchestrator | 2025-06-03 15:30:48.280383 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-06-03 15:30:48.280391 | orchestrator | Tuesday 03 June 2025 15:29:50 +0000 (0:00:01.985) 0:00:03.580 ********** 2025-06-03 15:30:48.280401 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:48.280409 | orchestrator | 2025-06-03 15:30:48.280418 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-06-03 15:30:48.280427 | orchestrator | Tuesday 03 June 2025 15:29:52 +0000 (0:00:01.678) 
0:00:05.259 ********** 2025-06-03 15:30:48.280447 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-06-03 15:30:48.280457 | orchestrator | ok: [testbed-manager] 2025-06-03 15:30:48.280470 | orchestrator | 2025-06-03 15:30:48.280483 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-06-03 15:30:48.280496 | orchestrator | Tuesday 03 June 2025 15:30:27 +0000 (0:00:34.920) 0:00:40.179 ********** 2025-06-03 15:30:48.280509 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:48.280524 | orchestrator | 2025-06-03 15:30:48.280538 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-06-03 15:30:48.280551 | orchestrator | Tuesday 03 June 2025 15:30:28 +0000 (0:00:00.717) 0:00:40.896 ********** 2025-06-03 15:30:48.280563 | orchestrator | ok: [testbed-manager] 2025-06-03 15:30:48.280624 | orchestrator | 2025-06-03 15:30:48.280634 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-06-03 15:30:48.280649 | orchestrator | Tuesday 03 June 2025 15:30:29 +0000 (0:00:01.083) 0:00:41.980 ********** 2025-06-03 15:30:48.280666 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:48.280680 | orchestrator | 2025-06-03 15:30:48.280693 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-06-03 15:30:48.280707 | orchestrator | Tuesday 03 June 2025 15:30:31 +0000 (0:00:02.355) 0:00:44.336 ********** 2025-06-03 15:30:48.280720 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:48.280732 | orchestrator | 2025-06-03 15:30:48.280748 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-06-03 15:30:48.280756 | orchestrator | Tuesday 03 June 2025 15:30:32 +0000 (0:00:01.212) 0:00:45.548 ********** 2025-06-03 15:30:48.280764 | orchestrator | changed: 
[testbed-manager] 2025-06-03 15:30:48.280785 | orchestrator | 2025-06-03 15:30:48.280793 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-06-03 15:30:48.280810 | orchestrator | Tuesday 03 June 2025 15:30:33 +0000 (0:00:00.520) 0:00:46.069 ********** 2025-06-03 15:30:48.280818 | orchestrator | ok: [testbed-manager] 2025-06-03 15:30:48.280831 | orchestrator | 2025-06-03 15:30:48.280844 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:30:48.280858 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:30:48.280881 | orchestrator | 2025-06-03 15:30:48.280894 | orchestrator | 2025-06-03 15:30:48.280907 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:30:48.280921 | orchestrator | Tuesday 03 June 2025 15:30:33 +0000 (0:00:00.408) 0:00:46.478 ********** 2025-06-03 15:30:48.280934 | orchestrator | =============================================================================== 2025-06-03 15:30:48.280947 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.92s 2025-06-03 15:30:48.280957 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.36s 2025-06-03 15:30:48.280964 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.99s 2025-06-03 15:30:48.280972 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.68s 2025-06-03 15:30:48.280980 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.21s 2025-06-03 15:30:48.280988 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.09s 2025-06-03 15:30:48.280995 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.08s 
2025-06-03 15:30:48.281003 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.72s 2025-06-03 15:30:48.281011 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.52s 2025-06-03 15:30:48.281019 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.41s 2025-06-03 15:30:48.281026 | orchestrator | 2025-06-03 15:30:48.281037 | orchestrator | 2025-06-03 15:30:48.281050 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:30:48.281063 | orchestrator | 2025-06-03 15:30:48.281076 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:30:48.281089 | orchestrator | Tuesday 03 June 2025 15:29:47 +0000 (0:00:00.399) 0:00:00.399 ********** 2025-06-03 15:30:48.281101 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-06-03 15:30:48.281115 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-06-03 15:30:48.281129 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-06-03 15:30:48.281142 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-06-03 15:30:48.281155 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-06-03 15:30:48.281167 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-06-03 15:30:48.281181 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-06-03 15:30:48.281194 | orchestrator | 2025-06-03 15:30:48.281207 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-06-03 15:30:48.281220 | orchestrator | 2025-06-03 15:30:48.281234 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-06-03 15:30:48.281243 | orchestrator | Tuesday 03 June 2025 15:29:49 +0000 
(0:00:01.989) 0:00:02.388 ********** 2025-06-03 15:30:48.281268 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:30:48.281283 | orchestrator | 2025-06-03 15:30:48.281294 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-06-03 15:30:48.281307 | orchestrator | Tuesday 03 June 2025 15:29:51 +0000 (0:00:01.877) 0:00:04.265 ********** 2025-06-03 15:30:48.281320 | orchestrator | ok: [testbed-manager] 2025-06-03 15:30:48.281334 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:30:48.281347 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:30:48.281359 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:30:48.281372 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:30:48.281386 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:30:48.281395 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:30:48.281403 | orchestrator | 2025-06-03 15:30:48.281421 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-06-03 15:30:48.281429 | orchestrator | Tuesday 03 June 2025 15:29:53 +0000 (0:00:02.419) 0:00:06.685 ********** 2025-06-03 15:30:48.281437 | orchestrator | ok: [testbed-manager] 2025-06-03 15:30:48.281445 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:30:48.281453 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:30:48.281461 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:30:48.281468 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:30:48.281476 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:30:48.281484 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:30:48.281492 | orchestrator | 2025-06-03 15:30:48.281500 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-06-03 15:30:48.281508 
| orchestrator | Tuesday 03 June 2025 15:29:57 +0000 (0:00:03.426) 0:00:10.111 ********** 2025-06-03 15:30:48.281515 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:48.281523 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:30:48.281531 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:30:48.281539 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:30:48.281546 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:30:48.281554 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:30:48.281562 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:30:48.281590 | orchestrator | 2025-06-03 15:30:48.281602 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-06-03 15:30:48.281610 | orchestrator | Tuesday 03 June 2025 15:29:59 +0000 (0:00:02.262) 0:00:12.374 ********** 2025-06-03 15:30:48.281618 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:48.281625 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:30:48.281633 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:30:48.281641 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:30:48.281649 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:30:48.281656 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:30:48.281665 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:30:48.281678 | orchestrator | 2025-06-03 15:30:48.281691 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-06-03 15:30:48.281704 | orchestrator | Tuesday 03 June 2025 15:30:10 +0000 (0:00:10.781) 0:00:23.155 ********** 2025-06-03 15:30:48.281716 | orchestrator | changed: [testbed-manager] 2025-06-03 15:30:48.281730 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:30:48.281740 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:30:48.281753 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:30:48.281766 | orchestrator | changed: [testbed-node-5] 
2025-06-03 15:30:48.281779 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:30:48.281793 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:30:48.281806 | orchestrator |
2025-06-03 15:30:48.281820 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-06-03 15:30:48.281832 | orchestrator | Tuesday 03 June 2025 15:30:26 +0000 (0:00:16.676) 0:00:39.832 **********
2025-06-03 15:30:48.281847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:30:48.281861 | orchestrator |
2025-06-03 15:30:48.281873 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-06-03 15:30:48.281881 | orchestrator | Tuesday 03 June 2025 15:30:28 +0000 (0:00:01.279) 0:00:41.111 **********
2025-06-03 15:30:48.281889 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-06-03 15:30:48.281901 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-06-03 15:30:48.281913 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-06-03 15:30:48.281927 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-06-03 15:30:48.281939 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-06-03 15:30:48.281953 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-06-03 15:30:48.281967 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-06-03 15:30:48.281975 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-06-03 15:30:48.281983 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-06-03 15:30:48.281990 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-06-03 15:30:48.281998 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-06-03 15:30:48.282006 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-06-03 15:30:48.282094 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-06-03 15:30:48.282112 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-06-03 15:30:48.282126 | orchestrator |
2025-06-03 15:30:48.282139 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-06-03 15:30:48.282154 | orchestrator | Tuesday 03 June 2025 15:30:33 +0000 (0:00:05.404) 0:00:46.516 **********
2025-06-03 15:30:48.282167 | orchestrator | ok: [testbed-manager]
2025-06-03 15:30:48.282180 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:30:48.282188 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:30:48.282196 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:30:48.282204 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:30:48.282211 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:30:48.282219 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:30:48.282227 | orchestrator |
2025-06-03 15:30:48.282235 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-06-03 15:30:48.282243 | orchestrator | Tuesday 03 June 2025 15:30:34 +0000 (0:00:01.206) 0:00:47.722 **********
2025-06-03 15:30:48.282251 | orchestrator | changed: [testbed-manager]
2025-06-03 15:30:48.282258 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:30:48.282266 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:30:48.282274 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:30:48.282282 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:30:48.282290 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:30:48.282298 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:30:48.282305 | orchestrator |
2025-06-03 15:30:48.282313 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-06-03 15:30:48.282330 | orchestrator | Tuesday 03 June 2025 15:30:36 +0000 (0:00:01.524) 0:00:49.246 **********
2025-06-03 15:30:48.282338 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:30:48.282378 | orchestrator | ok: [testbed-manager]
2025-06-03 15:30:48.282387 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:30:48.282395 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:30:48.282403 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:30:48.282410 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:30:48.282418 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:30:48.282426 | orchestrator |
2025-06-03 15:30:48.282434 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-06-03 15:30:48.282442 | orchestrator | Tuesday 03 June 2025 15:30:38 +0000 (0:00:01.905) 0:00:51.152 **********
2025-06-03 15:30:48.282449 | orchestrator | ok: [testbed-manager]
2025-06-03 15:30:48.282457 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:30:48.282465 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:30:48.282473 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:30:48.282480 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:30:48.282488 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:30:48.282496 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:30:48.282503 | orchestrator |
2025-06-03 15:30:48.282511 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-06-03 15:30:48.282519 | orchestrator | Tuesday 03 June 2025 15:30:40 +0000 (0:00:01.763) 0:00:52.916 **********
2025-06-03 15:30:48.282527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-06-03 15:30:48.282536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:30:48.282552 | orchestrator |
2025-06-03 15:30:48.282561 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-06-03 15:30:48.282583 | orchestrator | Tuesday 03 June 2025 15:30:41 +0000 (0:00:01.274) 0:00:54.190 **********
2025-06-03 15:30:48.282592 | orchestrator | changed: [testbed-manager]
2025-06-03 15:30:48.282600 | orchestrator |
2025-06-03 15:30:48.282608 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-06-03 15:30:48.282616 | orchestrator | Tuesday 03 June 2025 15:30:42 +0000 (0:00:01.711) 0:00:55.901 **********
2025-06-03 15:30:48.282623 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:30:48.282631 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:30:48.282639 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:30:48.282647 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:30:48.282654 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:30:48.282662 | orchestrator | changed: [testbed-manager]
2025-06-03 15:30:48.282670 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:30:48.282677 | orchestrator |
2025-06-03 15:30:48.282685 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:30:48.282693 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:30:48.282702 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:30:48.282710 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:30:48.282718 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:30:48.282726 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:30:48.282734 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:30:48.282742 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:30:48.282750 | orchestrator |
2025-06-03 15:30:48.282758 | orchestrator |
2025-06-03 15:30:48.282766 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:30:48.282773 | orchestrator | Tuesday 03 June 2025 15:30:46 +0000 (0:00:03.426) 0:00:59.328 **********
2025-06-03 15:30:48.282781 | orchestrator | ===============================================================================
2025-06-03 15:30:48.282789 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 16.68s
2025-06-03 15:30:48.282797 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.78s
2025-06-03 15:30:48.282804 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.40s
2025-06-03 15:30:48.282812 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.43s
2025-06-03 15:30:48.282822 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.43s
2025-06-03 15:30:48.282835 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.42s
2025-06-03 15:30:48.282853 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.26s
2025-06-03 15:30:48.282867 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.99s
2025-06-03 15:30:48.282880 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.91s
2025-06-03 15:30:48.282894 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.88s
2025-06-03 15:30:48.282908 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.76s
2025-06-03 15:30:48.282929 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.71s
2025-06-03 15:30:48.282953 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.52s
2025-06-03 15:30:48.282966 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.28s
2025-06-03 15:30:48.282980 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.27s
2025-06-03 15:30:48.282994 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.21s
2025-06-03 15:30:48.283007 | orchestrator | 2025-06-03 15:30:48 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:30:48.283020 | orchestrator | 2025-06-03 15:30:48 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state STARTED
2025-06-03 15:30:48.283033 | orchestrator | 2025-06-03 15:30:48 | INFO  | Task 277b44e5-5d14-42fe-990d-1929b436e41f is in state SUCCESS
2025-06-03 15:30:48.283041 | orchestrator | 2025-06-03 15:30:48 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:30:51.319872 | orchestrator | 2025-06-03 15:30:51 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:30:51.321350 | orchestrator | 2025-06-03 15:30:51 | INFO  | Task bddd135d-8498-40f3-8a22-9627c5919e72 is in state STARTED
2025-06-03 15:30:51.322444 | orchestrator | 2025-06-03 15:30:51 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:30:51.324258 | orchestrator | 2025-06-03 15:30:51 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state STARTED
2025-06-03 15:30:51.324298 | orchestrator | 2025-06-03 15:30:51 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:30:54.370801 | orchestrator | 2025-06-03 15:30:54 | INFO  | Task
ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:31:58.531984 | orchestrator | 2025-06-03 15:31:58 | INFO  | Task bddd135d-8498-40f3-8a22-9627c5919e72 is in state SUCCESS
2025-06-03 15:31:58.535442 | orchestrator | 2025-06-03 15:31:58 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:31:58.536146 | orchestrator | 2025-06-03 15:31:58 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state STARTED
2025-06-03 15:31:58.536230 | orchestrator | 2025-06-03 15:31:58 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:32:01.577985 | orchestrator | 2025-06-03 15:32:01 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:32:01.578793 | orchestrator | 2025-06-03 15:32:01 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:32:01.581293 | orchestrator | 2025-06-03 15:32:01 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state STARTED
2025-06-03 15:32:01.581324 | orchestrator | 2025-06-03 15:32:01 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:32:04.630479 | orchestrator | 2025-06-03 15:32:04 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:32:04.631245 | orchestrator | 2025-06-03 15:32:04 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:32:04.631887 | orchestrator | 2025-06-03 15:32:04 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state STARTED
2025-06-03 15:32:04.631997 | orchestrator | 2025-06-03 15:32:04 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:32:07.679190 | orchestrator | 2025-06-03 15:32:07 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:32:07.680353 | orchestrator | 2025-06-03 15:32:07 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:32:07.681072 | orchestrator | 2025-06-03 15:32:07 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state STARTED
2025-06-03 15:32:07.681175 | orchestrator | 2025-06-03 15:32:07 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:32:10.719256 | orchestrator | 2025-06-03 15:32:10 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:32:10.719348 | orchestrator | 2025-06-03 15:32:10 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:32:10.720133 | orchestrator | 2025-06-03 15:32:10 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state STARTED
2025-06-03 15:32:10.720159 | orchestrator | 2025-06-03 15:32:10 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:32:13.770863 | orchestrator | 2025-06-03 15:32:13 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:32:13.772882 | orchestrator | 2025-06-03 15:32:13 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:32:13.773145 | orchestrator | 2025-06-03 15:32:13 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED
2025-06-03 15:32:13.774048 | orchestrator | 2025-06-03 15:32:13 | INFO  | Task a5a20b96-84cd-4464-8800-d7d7be04cad5 is in state STARTED
2025-06-03 15:32:13.780204 | orchestrator | 2025-06-03 15:32:13 | INFO  | Task 4453c69d-74aa-49ec-9ad6-cb35d65e6976 is in state SUCCESS
2025-06-03 15:32:13.782650 | orchestrator |
2025-06-03 15:32:13.782712 | orchestrator |
2025-06-03 15:32:13.782723 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-06-03 15:32:13.782732 | orchestrator |
2025-06-03 15:32:13.782741 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-06-03 15:32:13.782751 | orchestrator | Tuesday 03 June 2025 15:30:08 +0000 (0:00:00.213) 0:00:00.213 **********
2025-06-03 15:32:13.782760 | orchestrator | ok: [testbed-manager]
2025-06-03 15:32:13.782857 | orchestrator |
2025-06-03 15:32:13.782869 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-06-03 15:32:13.782878 | orchestrator | Tuesday 03 June 2025 15:30:09 +0000 (0:00:00.831) 0:00:01.045 **********
2025-06-03 15:32:13.782920 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-06-03 15:32:13.782930 | orchestrator |
2025-06-03 15:32:13.782939 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-06-03 15:32:13.782948 | orchestrator | Tuesday 03 June 2025 15:30:09 +0000 (0:00:00.561) 0:00:01.607 **********
2025-06-03 15:32:13.782956 | orchestrator | changed: [testbed-manager]
2025-06-03 15:32:13.782965 | orchestrator |
2025-06-03 15:32:13.782974 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-06-03 15:32:13.782983 | orchestrator | Tuesday 03 June 2025 15:30:11 +0000 (0:00:01.910) 0:00:03.517 **********
2025-06-03 15:32:13.782992 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
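The `FAILED - RETRYING ... (10 retries left)` line above is Ansible's built-in retry loop: the task fails on the first attempt (the compose service is not yet healthy), is retried with a delay, and eventually reports `ok`, which is why the recap still shows `failed=0`. A minimal sketch of a task that produces this output; the module and parameter values here are illustrative assumptions, not the actual osism.services.phpmyadmin role code:

```yaml
# Hypothetical sketch, not the real role task.
- name: Manage phpmyadmin service
  community.docker.docker_compose_v2:
    project_src: /opt/phpmyadmin   # directory holding the copied docker-compose.yml
    state: present
  register: result
  until: result is success         # re-run the module until it succeeds
  retries: 10                      # matches the "(10 retries left)" countdown
  delay: 10                        # seconds to wait between attempts
```

With `retries: 10` the task only hard-fails after all attempts are exhausted; each intermediate failure is logged as `FAILED - RETRYING`, and the task duration (99.80s in the recap) includes the waiting time.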
2025-06-03 15:32:13.783001 | orchestrator | ok: [testbed-manager]
2025-06-03 15:32:13.783009 | orchestrator |
2025-06-03 15:32:13.783018 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-06-03 15:32:13.783026 | orchestrator | Tuesday 03 June 2025 15:31:51 +0000 (0:01:39.802) 0:01:43.320 **********
2025-06-03 15:32:13.783035 | orchestrator | changed: [testbed-manager]
2025-06-03 15:32:13.783043 | orchestrator |
2025-06-03 15:32:13.783052 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:32:13.783061 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:32:13.783071 | orchestrator |
2025-06-03 15:32:13.783080 | orchestrator |
2025-06-03 15:32:13.783088 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:32:13.783097 | orchestrator | Tuesday 03 June 2025 15:31:55 +0000 (0:00:04.098) 0:01:47.419 **********
2025-06-03 15:32:13.783105 | orchestrator | ===============================================================================
2025-06-03 15:32:13.783114 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 99.80s
2025-06-03 15:32:13.783122 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.10s
2025-06-03 15:32:13.783131 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.91s
2025-06-03 15:32:13.783139 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.83s
2025-06-03 15:32:13.783148 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.56s
2025-06-03 15:32:13.783157 | orchestrator |
2025-06-03 15:32:13.783165 | orchestrator |
2025-06-03 15:32:13.783174 | orchestrator | PLAY [Apply role common]
*******************************************************
2025-06-03 15:32:13.783182 | orchestrator |
2025-06-03 15:32:13.783191 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-03 15:32:13.783200 | orchestrator | Tuesday 03 June 2025 15:29:40 +0000 (0:00:00.264) 0:00:00.264 **********
2025-06-03 15:32:13.783210 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:32:13.783221 | orchestrator |
2025-06-03 15:32:13.783231 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-06-03 15:32:13.783262 | orchestrator | Tuesday 03 June 2025 15:29:41 +0000 (0:00:01.218) 0:00:01.483 **********
2025-06-03 15:32:13.783272 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-03 15:32:13.783281 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-03 15:32:13.783291 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-03 15:32:13.783300 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-03 15:32:13.783310 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-03 15:32:13.783320 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-03 15:32:13.783330 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-03 15:32:13.783340 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-03 15:32:13.783350 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-03 15:32:13.783360 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-03 15:32:13.783372 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-03 15:32:13.783381 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-03 15:32:13.783392 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-03 15:32:13.783402 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-03 15:32:13.783411 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-03 15:32:13.783421 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-03 15:32:13.783444 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-03 15:32:13.783454 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-03 15:32:13.783464 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-03 15:32:13.783474 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-03 15:32:13.783484 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-03 15:32:13.783494 | orchestrator |
2025-06-03 15:32:13.783504 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-03 15:32:13.783513 | orchestrator | Tuesday 03 June 2025 15:29:46 +0000 (0:00:04.440) 0:00:05.923 **********
2025-06-03 15:32:13.783523 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:32:13.783535 | orchestrator |
2025-06-03 15:32:13.783545 |
orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-06-03 15:32:13.783599 | orchestrator | Tuesday 03 June 2025 15:29:47 +0000 (0:00:01.312) 0:00:07.235 ********** 2025-06-03 15:32:13.783645 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:32:13.783693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:32:13.783724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:32:13.783734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:32:13.783748 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.783771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.783781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.783791 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:32:13.783801 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.783828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-06-03 15:32:13.783838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:32:13.783848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.783861 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:32:13.783890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.783900 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.783910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.783919 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.783938 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.783947 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.783957 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.783970 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.783979 | orchestrator | 2025-06-03 15:32:13.783989 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-06-03 
15:32:13.783998 | orchestrator | Tuesday 03 June 2025 15:29:52 +0000 (0:00:05.194) 0:00:12.430 ********** 2025-06-03 15:32:13.784012 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:32:13.784022 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:32:13.784031 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:32:13.784046 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:32:13.784055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:32:13.784065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:32:13.784074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:32:13.784083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:32:13.784096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:32:13.784106 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:32:13.784121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:32:13.784130 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:32:13.784140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:32:13.784154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:32:13.784164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:32:13.784173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:32:13.784182 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:32:13.784191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:32:13.784200 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:32:13.784214 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:32:13.784223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:32:13.784237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-06-03 15:32:13.784254 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:32:13.784263 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:32:13.784272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:32:13.784281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:32:13.784290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:32:13.784299 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:32:13.784308 | orchestrator | 2025-06-03 15:32:13.784317 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-06-03 15:32:13.784326 | orchestrator | Tuesday 03 June 2025 15:29:54 +0000 (0:00:01.188) 0:00:13.619 ********** 2025-06-03 15:32:13.784335 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:32:13.784349 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:32:13.784363 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:32:13.784378 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:32:13.784387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-03 15:32:13.784397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:32:13.784406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:32:13.784415 | 
orchestrator | skipping: [testbed-node-0]
2025-06-03 15:32:13.784424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.784433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.784442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.784452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.784907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.784928 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:32:13.784937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.784946 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:32:13.784955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.784964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.784973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.784982 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:32:13.784997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.785011 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.785034 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.785044 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:32:13.785053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.785062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.785071 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.785080 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:32:13.785089 | orchestrator |
2025-06-03 15:32:13.785097 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-06-03 15:32:13.785106 | orchestrator | Tuesday 03 June 2025 15:29:56 +0000 (0:00:02.954) 0:00:16.573 **********
2025-06-03 15:32:13.785115 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:32:13.785124 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:32:13.785133 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:32:13.785149 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:32:13.785163 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:32:13.785177 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:32:13.785190 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:32:13.785204 | orchestrator |
2025-06-03 15:32:13.785220 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-06-03 15:32:13.785234 | orchestrator | Tuesday 03 June 2025 15:29:58 +0000 (0:00:01.058) 0:00:17.632 **********
2025-06-03 15:32:13.785249 | orchestrator | skipping: [testbed-manager]
2025-06-03 15:32:13.785260 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:32:13.785268 |
orchestrator | skipping: [testbed-node-1]
2025-06-03 15:32:13.785277 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:32:13.785285 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:32:13.785294 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:32:13.785302 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:32:13.785311 | orchestrator |
2025-06-03 15:32:13.785319 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-06-03 15:32:13.785328 | orchestrator | Tuesday 03 June 2025 15:29:59 +0000 (0:00:01.378) 0:00:19.011 **********
2025-06-03 15:32:13.785345 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.785359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.785378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.785387 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.785397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.785424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.785433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.785443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.785459 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.785473 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.785487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.785497 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.785506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.785516 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.785526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.785543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.785553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.785576 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.785587 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.785598 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.785608 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'},
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.785640 | orchestrator |
2025-06-03 15:32:13.785651 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-06-03 15:32:13.785661 | orchestrator | Tuesday 03 June 2025 15:30:06 +0000 (0:00:06.945) 0:00:25.956 **********
2025-06-03 15:32:13.785671 | orchestrator | [WARNING]: Skipped
2025-06-03 15:32:13.785681 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-06-03 15:32:13.785691 | orchestrator | to this access issue:
2025-06-03 15:32:13.785701 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-06-03 15:32:13.785710 | orchestrator | directory
2025-06-03 15:32:13.785721 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-03 15:32:13.785732 | orchestrator |
2025-06-03 15:32:13.785745 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-06-03 15:32:13.785764 | orchestrator | Tuesday 03 June 2025 15:30:07 +0000 (0:00:01.235) 0:00:27.192 **********
2025-06-03 15:32:13.785777 | orchestrator | [WARNING]: Skipped
2025-06-03 15:32:13.785789 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-06-03 15:32:13.785802 | orchestrator | to this access issue:
2025-06-03 15:32:13.785814 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-06-03 15:32:13.785826 | orchestrator | directory
2025-06-03 15:32:13.785840 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-03 15:32:13.785852 | orchestrator |
2025-06-03 15:32:13.785864 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-06-03 15:32:13.785876 | orchestrator | Tuesday 03 June 2025 15:30:08 +0000 (0:00:00.619) 0:00:27.812 **********
2025-06-03 15:32:13.785889 | orchestrator | [WARNING]: Skipped
2025-06-03 15:32:13.785900 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-06-03 15:32:13.785911 | orchestrator | to this access issue:
2025-06-03 15:32:13.785922 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-06-03 15:32:13.785933 | orchestrator | directory
2025-06-03 15:32:13.785943 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-03 15:32:13.785954 | orchestrator |
2025-06-03 15:32:13.785965 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-06-03 15:32:13.785975 | orchestrator | Tuesday 03 June 2025 15:30:08 +0000 (0:00:00.724) 0:00:28.536 **********
2025-06-03 15:32:13.785986 | orchestrator | [WARNING]: Skipped
2025-06-03 15:32:13.785997 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-06-03 15:32:13.786008 | orchestrator | to this access issue:
2025-06-03 15:32:13.786096 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-06-03 15:32:13.786108 | orchestrator | directory
2025-06-03 15:32:13.786119 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-03 15:32:13.786129 | orchestrator |
2025-06-03 15:32:13.786141 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-06-03 15:32:13.786152 | orchestrator | Tuesday 03 June 2025 15:30:09 +0000 (0:00:00.720) 0:00:29.257 **********
2025-06-03 15:32:13.786163 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:32:13.786173 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:32:13.786184 | orchestrator | changed: [testbed-manager]
2025-06-03 15:32:13.786195 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:32:13.786206 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:32:13.786222 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:32:13.786233 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:32:13.786243 | orchestrator |
2025-06-03 15:32:13.786254 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-06-03 15:32:13.786265 | orchestrator | Tuesday 03 June 2025 15:30:15 +0000 (0:00:05.703) 0:00:34.960 **********
2025-06-03 15:32:13.786276 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-03 15:32:13.786288 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-03 15:32:13.786299 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-03 15:32:13.786318 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-03 15:32:13.786329 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-03 15:32:13.786340 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-03 15:32:13.786351 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-03 15:32:13.786362 | orchestrator |
2025-06-03 15:32:13.786373 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-06-03 15:32:13.786391 | orchestrator | Tuesday 03 June 2025 15:30:17 +0000 (0:00:02.582) 0:00:37.542 **********
2025-06-03 15:32:13.786402 | orchestrator | changed: [testbed-manager]
2025-06-03 15:32:13.786414 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:32:13.786424 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:32:13.786435 | orchestrator | changed: [testbed-node-2]
2025-06-03
15:32:13.786446 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:32:13.786456 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:32:13.786468 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:32:13.786478 | orchestrator |
2025-06-03 15:32:13.786489 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-06-03 15:32:13.786500 | orchestrator | Tuesday 03 June 2025 15:30:20 +0000 (0:00:03.053) 0:00:40.596 **********
2025-06-03 15:32:13.786512 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.786524 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.786535 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.786547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.786563 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.786582 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.786600 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.786612 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.786645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.786657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.786668 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.786679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.786695 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.786713 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.786731 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.786744 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-03 15:32:13.786755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.786766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.786778 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.786789 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.786805 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:32:13.786823 | orchestrator |
2025-06-03 15:32:13.786835 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-06-03 15:32:13.786846 | orchestrator | Tuesday 03 June 2025 15:30:23 +0000 (0:00:02.516) 0:00:43.113 **********
2025-06-03 15:32:13.786857 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-03 15:32:13.786868 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-03 15:32:13.786879 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-03 15:32:13.786899 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-03 15:32:13.786911 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-03 15:32:13.786922 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-03 15:32:13.786933 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-03 15:32:13.786944 | orchestrator |
2025-06-03 15:32:13.786955 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-06-03 15:32:13.786966 | orchestrator | Tuesday 03 June 2025 15:30:26 +0000 (0:00:02.535) 0:00:45.649 **********
2025-06-03 15:32:13.786976 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-03 15:32:13.786987 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-03 15:32:13.786998 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-03 15:32:13.787009 | orchestrator | changed:
[testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-03 15:32:13.787019 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-03 15:32:13.787030 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-03 15:32:13.787041 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-03 15:32:13.787052 | orchestrator | 2025-06-03 15:32:13.787062 | orchestrator | TASK [common : Check common containers] **************************************** 2025-06-03 15:32:13.787073 | orchestrator | Tuesday 03 June 2025 15:30:28 +0000 (0:00:02.169) 0:00:47.818 ********** 2025-06-03 15:32:13.787084 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:32:13.787096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:32:13.787108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:32:13.787126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:32:13.787143 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:32:13.787162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.787174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.787185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:32:13.787197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-06-03 15:32:13.787208 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.787226 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-03 15:32:13.787242 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.787261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.787273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.787285 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.787296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.787308 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.787320 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.787337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.787357 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.787369 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:32:13.787380 | orchestrator | 2025-06-03 15:32:13.787396 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-06-03 15:32:13.787408 | orchestrator | Tuesday 03 June 2025 15:30:31 +0000 (0:00:03.196) 0:00:51.015 ********** 2025-06-03 15:32:13.787419 | orchestrator | changed: [testbed-manager] 2025-06-03 15:32:13.787430 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:32:13.787440 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:32:13.787451 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:32:13.787462 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:32:13.787473 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:32:13.787483 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:32:13.787494 | orchestrator | 2025-06-03 15:32:13.787505 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-06-03 15:32:13.787516 | orchestrator | Tuesday 03 June 2025 15:30:33 +0000 (0:00:01.974) 0:00:52.989 ********** 2025-06-03 15:32:13.787527 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:32:13.787538 | orchestrator | changed: [testbed-manager] 2025-06-03 15:32:13.787549 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:32:13.787560 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:32:13.787570 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:32:13.787581 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:32:13.787592 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:32:13.787603 | orchestrator | 2025-06-03 15:32:13.787613 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2025-06-03 15:32:13.787674 | orchestrator | Tuesday 03 June 2025 15:30:34 +0000 (0:00:01.406) 0:00:54.396 ********** 2025-06-03 15:32:13.787686 | orchestrator | 2025-06-03 15:32:13.787697 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-03 15:32:13.787708 | orchestrator | Tuesday 03 June 2025 15:30:34 +0000 (0:00:00.208) 0:00:54.605 ********** 2025-06-03 15:32:13.787719 | orchestrator | 2025-06-03 15:32:13.787730 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-03 15:32:13.787740 | orchestrator | Tuesday 03 June 2025 15:30:35 +0000 (0:00:00.097) 0:00:54.702 ********** 2025-06-03 15:32:13.787751 | orchestrator | 2025-06-03 15:32:13.787761 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-03 15:32:13.787773 | orchestrator | Tuesday 03 June 2025 15:30:35 +0000 (0:00:00.064) 0:00:54.767 ********** 2025-06-03 15:32:13.787791 | orchestrator | 2025-06-03 15:32:13.787803 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-03 15:32:13.787813 | orchestrator | Tuesday 03 June 2025 15:30:35 +0000 (0:00:00.065) 0:00:54.832 ********** 2025-06-03 15:32:13.787824 | orchestrator | 2025-06-03 15:32:13.787835 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-03 15:32:13.787859 | orchestrator | Tuesday 03 June 2025 15:30:35 +0000 (0:00:00.104) 0:00:54.937 ********** 2025-06-03 15:32:13.787871 | orchestrator | 2025-06-03 15:32:13.787882 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-03 15:32:13.787892 | orchestrator | Tuesday 03 June 2025 15:30:35 +0000 (0:00:00.131) 0:00:55.069 ********** 2025-06-03 15:32:13.787903 | orchestrator | 2025-06-03 15:32:13.787914 | orchestrator | RUNNING HANDLER [common : 
Restart fluentd container] *************************** 2025-06-03 15:32:13.787925 | orchestrator | Tuesday 03 June 2025 15:30:35 +0000 (0:00:00.109) 0:00:55.179 ********** 2025-06-03 15:32:13.787936 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:32:13.787946 | orchestrator | changed: [testbed-manager] 2025-06-03 15:32:13.787957 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:32:13.787968 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:32:13.787978 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:32:13.787989 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:32:13.788000 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:32:13.788011 | orchestrator | 2025-06-03 15:32:13.788022 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-06-03 15:32:13.788033 | orchestrator | Tuesday 03 June 2025 15:31:17 +0000 (0:00:42.194) 0:01:37.373 ********** 2025-06-03 15:32:13.788044 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:32:13.788055 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:32:13.788065 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:32:13.788076 | orchestrator | changed: [testbed-manager] 2025-06-03 15:32:13.788087 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:32:13.788098 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:32:13.788108 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:32:13.788118 | orchestrator | 2025-06-03 15:32:13.788127 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-06-03 15:32:13.788137 | orchestrator | Tuesday 03 June 2025 15:32:03 +0000 (0:00:45.623) 0:02:22.996 ********** 2025-06-03 15:32:13.788146 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:32:13.788156 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:32:13.788166 | orchestrator | ok: [testbed-manager] 2025-06-03 15:32:13.788175 | orchestrator | ok: [testbed-node-2] 2025-06-03 
15:32:13.788185 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:32:13.788194 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:32:13.788204 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:32:13.788213 | orchestrator | 2025-06-03 15:32:13.788223 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-06-03 15:32:13.788233 | orchestrator | Tuesday 03 June 2025 15:32:05 +0000 (0:00:02.403) 0:02:25.400 ********** 2025-06-03 15:32:13.788242 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:32:13.788252 | orchestrator | changed: [testbed-manager] 2025-06-03 15:32:13.788262 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:32:13.788272 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:32:13.788281 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:32:13.788291 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:32:13.788306 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:32:13.788316 | orchestrator | 2025-06-03 15:32:13.788325 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:32:13.788336 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-03 15:32:13.788346 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-03 15:32:13.788368 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-03 15:32:13.788378 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-03 15:32:13.788388 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-03 15:32:13.788398 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-03 15:32:13.788408 | orchestrator | testbed-node-5 : 
ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-03 15:32:13.788418 | orchestrator | 2025-06-03 15:32:13.788427 | orchestrator | 2025-06-03 15:32:13.788437 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:32:13.788446 | orchestrator | Tuesday 03 June 2025 15:32:10 +0000 (0:00:04.777) 0:02:30.178 ********** 2025-06-03 15:32:13.788456 | orchestrator | =============================================================================== 2025-06-03 15:32:13.788466 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 45.62s 2025-06-03 15:32:13.788475 | orchestrator | common : Restart fluentd container ------------------------------------- 42.19s 2025-06-03 15:32:13.788485 | orchestrator | common : Copying over config.json files for services -------------------- 6.95s 2025-06-03 15:32:13.788494 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.70s 2025-06-03 15:32:13.788504 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.19s 2025-06-03 15:32:13.788513 | orchestrator | common : Restart cron container ----------------------------------------- 4.78s 2025-06-03 15:32:13.788523 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.44s 2025-06-03 15:32:13.788533 | orchestrator | common : Check common containers ---------------------------------------- 3.20s 2025-06-03 15:32:13.788542 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.05s 2025-06-03 15:32:13.788552 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.95s 2025-06-03 15:32:13.788561 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.58s 2025-06-03 15:32:13.788571 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox 
------------------------ 2.54s 2025-06-03 15:32:13.788580 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.52s 2025-06-03 15:32:13.788590 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.40s 2025-06-03 15:32:13.788599 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.17s 2025-06-03 15:32:13.788608 | orchestrator | common : Creating log volume -------------------------------------------- 1.97s 2025-06-03 15:32:13.788663 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.41s 2025-06-03 15:32:13.788681 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.38s 2025-06-03 15:32:13.788698 | orchestrator | common : include_tasks -------------------------------------------------- 1.31s 2025-06-03 15:32:13.788714 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.24s 2025-06-03 15:32:13.788729 | orchestrator | 2025-06-03 15:32:13 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:32:13.788739 | orchestrator | 2025-06-03 15:32:13 | INFO  | Task 24d8f5fe-678b-49e9-84e6-088e60df21c4 is in state STARTED 2025-06-03 15:32:13.788749 | orchestrator | 2025-06-03 15:32:13 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:16.821393 | orchestrator | 2025-06-03 15:32:16 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:32:16.822309 | orchestrator | 2025-06-03 15:32:16 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:32:16.822663 | orchestrator | 2025-06-03 15:32:16 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED 2025-06-03 15:32:16.823135 | orchestrator | 2025-06-03 15:32:16 | INFO  | Task a5a20b96-84cd-4464-8800-d7d7be04cad5 is in state STARTED 2025-06-03 15:32:16.823833 | orchestrator | 
2025-06-03 15:32:16 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:32:16.824168 | orchestrator | 2025-06-03 15:32:16 | INFO  | Task 24d8f5fe-678b-49e9-84e6-088e60df21c4 is in state STARTED 2025-06-03 15:32:16.824234 | orchestrator | 2025-06-03 15:32:16 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:19.842727 | orchestrator | 2025-06-03 15:32:19 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:32:19.842828 | orchestrator | 2025-06-03 15:32:19 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:32:19.842910 | orchestrator | 2025-06-03 15:32:19 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED 2025-06-03 15:32:19.843341 | orchestrator | 2025-06-03 15:32:19 | INFO  | Task a5a20b96-84cd-4464-8800-d7d7be04cad5 is in state STARTED 2025-06-03 15:32:19.844079 | orchestrator | 2025-06-03 15:32:19 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:32:19.844731 | orchestrator | 2025-06-03 15:32:19 | INFO  | Task 24d8f5fe-678b-49e9-84e6-088e60df21c4 is in state STARTED 2025-06-03 15:32:19.844752 | orchestrator | 2025-06-03 15:32:19 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:32:22.878388 | orchestrator | 2025-06-03 15:32:22 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:32:22.880085 | orchestrator | 2025-06-03 15:32:22 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:32:22.882318 | orchestrator | 2025-06-03 15:32:22 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED 2025-06-03 15:32:22.885913 | orchestrator | 2025-06-03 15:32:22 | INFO  | Task a5a20b96-84cd-4464-8800-d7d7be04cad5 is in state STARTED 2025-06-03 15:32:22.887003 | orchestrator | 2025-06-03 15:32:22 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:32:22.887640 | orchestrator | 
2025-06-03 15:32:22 | INFO  | Task 24d8f5fe-678b-49e9-84e6-088e60df21c4 is in state STARTED
2025-06-03 15:32:22.887808 | orchestrator | 2025-06-03 15:32:22 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:32:25.939394 | orchestrator | 2025-06-03 15:32:25 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:32:25.942590 | orchestrator | 2025-06-03 15:32:25 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:32:25.946145 | orchestrator | 2025-06-03 15:32:25 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED
2025-06-03 15:32:25.951296 | orchestrator | 2025-06-03 15:32:25 | INFO  | Task a5a20b96-84cd-4464-8800-d7d7be04cad5 is in state STARTED
2025-06-03 15:32:25.951878 | orchestrator | 2025-06-03 15:32:25 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:32:25.953110 | orchestrator | 2025-06-03 15:32:25 | INFO  | Task 24d8f5fe-678b-49e9-84e6-088e60df21c4 is in state STARTED
2025-06-03 15:32:25.953150 | orchestrator | 2025-06-03 15:32:25 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:32:29.003944 | orchestrator | 2025-06-03 15:32:29 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:32:29.018098 | orchestrator | 2025-06-03 15:32:29 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:32:29.018554 | orchestrator | 2025-06-03 15:32:29 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED
2025-06-03 15:32:29.028244 | orchestrator | 2025-06-03 15:32:29 | INFO  | Task a5a20b96-84cd-4464-8800-d7d7be04cad5 is in state STARTED
2025-06-03 15:32:29.029174 | orchestrator | 2025-06-03 15:32:29 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:32:29.030733 | orchestrator | 2025-06-03 15:32:29 | INFO  | Task 24d8f5fe-678b-49e9-84e6-088e60df21c4 is in state STARTED
2025-06-03 15:32:29.030793 | orchestrator | 2025-06-03 15:32:29 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:32:32.063618 | orchestrator | 2025-06-03 15:32:32 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:32:32.063920 | orchestrator | 2025-06-03 15:32:32 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:32:32.064439 | orchestrator | 2025-06-03 15:32:32 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED
2025-06-03 15:32:32.070089 | orchestrator | 2025-06-03 15:32:32 | INFO  | Task a5a20b96-84cd-4464-8800-d7d7be04cad5 is in state STARTED
2025-06-03 15:32:32.070153 | orchestrator | 2025-06-03 15:32:32 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:32:32.074175 | orchestrator | 2025-06-03 15:32:32 | INFO  | Task 24d8f5fe-678b-49e9-84e6-088e60df21c4 is in state STARTED
2025-06-03 15:32:32.074219 | orchestrator | 2025-06-03 15:32:32 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:32:35.099003 | orchestrator | 2025-06-03 15:32:35 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:32:35.099524 | orchestrator | 2025-06-03 15:32:35 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED
2025-06-03 15:32:35.099564 | orchestrator | 2025-06-03 15:32:35 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:32:35.101297 | orchestrator | 2025-06-03 15:32:35 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED
2025-06-03 15:32:35.101766 | orchestrator | 2025-06-03 15:32:35 | INFO  | Task a5a20b96-84cd-4464-8800-d7d7be04cad5 is in state STARTED
2025-06-03 15:32:35.102224 | orchestrator | 2025-06-03 15:32:35 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:32:35.102594 | orchestrator | 2025-06-03 15:32:35 | INFO  | Task 24d8f5fe-678b-49e9-84e6-088e60df21c4 is in state SUCCESS
2025-06-03 15:32:35.102692 | orchestrator | 2025-06-03 15:32:35 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:32:38.138314 | orchestrator | 2025-06-03 15:32:38 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:32:38.138743 | orchestrator | 2025-06-03 15:32:38 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED
2025-06-03 15:32:38.139238 | orchestrator | 2025-06-03 15:32:38 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:32:38.142433 | orchestrator | 2025-06-03 15:32:38 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED
2025-06-03 15:32:38.143009 | orchestrator | 2025-06-03 15:32:38 | INFO  | Task a5a20b96-84cd-4464-8800-d7d7be04cad5 is in state STARTED
2025-06-03 15:32:38.143574 | orchestrator | 2025-06-03 15:32:38 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:32:38.143695 | orchestrator | 2025-06-03 15:32:38 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:32:41.171047 | orchestrator | 2025-06-03 15:32:41 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:32:41.171196 | orchestrator | 2025-06-03 15:32:41 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED
2025-06-03 15:32:41.171960 | orchestrator | 2025-06-03 15:32:41 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:32:41.172366 | orchestrator | 2025-06-03 15:32:41 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED
2025-06-03 15:32:41.174373 | orchestrator | 2025-06-03 15:32:41 | INFO  | Task a5a20b96-84cd-4464-8800-d7d7be04cad5 is in state STARTED
2025-06-03 15:32:41.174888 | orchestrator | 2025-06-03 15:32:41 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:32:41.174923 | orchestrator | 2025-06-03 15:32:41 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:32:44.207384 | orchestrator | 2025-06-03 15:32:44 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:32:44.208503 | orchestrator | 2025-06-03 15:32:44 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED
2025-06-03 15:32:44.209449 | orchestrator | 2025-06-03 15:32:44 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:32:44.210511 | orchestrator | 2025-06-03 15:32:44 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED
2025-06-03 15:32:44.211835 | orchestrator | 2025-06-03 15:32:44 | INFO  | Task a5a20b96-84cd-4464-8800-d7d7be04cad5 is in state SUCCESS
2025-06-03 15:32:44.212892 | orchestrator |
2025-06-03 15:32:44.212929 | orchestrator |
2025-06-03 15:32:44.212941 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-03 15:32:44.212954 | orchestrator |
2025-06-03 15:32:44.212965 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-03 15:32:44.212977 | orchestrator | Tuesday 03 June 2025 15:32:16 +0000 (0:00:00.273) 0:00:00.273 **********
2025-06-03 15:32:44.212989 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:32:44.213001 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:32:44.213012 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:32:44.213023 | orchestrator |
2025-06-03 15:32:44.213035 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-03 15:32:44.213046 | orchestrator | Tuesday 03 June 2025 15:32:17 +0000 (0:00:00.421) 0:00:00.695 **********
2025-06-03 15:32:44.213058 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-06-03 15:32:44.213069 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-06-03 15:32:44.213080 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-06-03 15:32:44.213091 | orchestrator |
2025-06-03 15:32:44.213102 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-06-03 15:32:44.213114 | orchestrator |
2025-06-03 15:32:44.213153 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-06-03 15:32:44.213174 | orchestrator | Tuesday 03 June 2025 15:32:18 +0000 (0:00:00.623) 0:00:01.435 **********
2025-06-03 15:32:44.213192 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:32:44.213212 | orchestrator |
2025-06-03 15:32:44.213232 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-06-03 15:32:44.213250 | orchestrator | Tuesday 03 June 2025 15:32:18 +0000 (0:00:00.623) 0:00:02.059 **********
2025-06-03 15:32:44.213269 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-03 15:32:44.213288 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-03 15:32:44.213304 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-03 15:32:44.213321 | orchestrator |
2025-06-03 15:32:44.213339 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-06-03 15:32:44.213358 | orchestrator | Tuesday 03 June 2025 15:32:19 +0000 (0:00:00.699) 0:00:02.759 **********
2025-06-03 15:32:44.213409 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-03 15:32:44.213422 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-03 15:32:44.213433 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-03 15:32:44.213443 | orchestrator |
2025-06-03 15:32:44.213455 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-06-03 15:32:44.213466 | orchestrator | Tuesday 03 June 2025 15:32:21 +0000 (0:00:01.971) 0:00:04.730 **********
2025-06-03 15:32:44.213477 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:32:44.213488 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:32:44.213500 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:32:44.213512 | orchestrator |
2025-06-03 15:32:44.213525 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-06-03 15:32:44.213538 | orchestrator | Tuesday 03 June 2025 15:32:23 +0000 (0:00:02.320) 0:00:07.051 **********
2025-06-03 15:32:44.213550 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:32:44.213562 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:32:44.213575 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:32:44.213587 | orchestrator |
2025-06-03 15:32:44.213599 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:32:44.213613 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:32:44.213658 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:32:44.213673 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:32:44.213685 | orchestrator |
2025-06-03 15:32:44.213697 | orchestrator |
2025-06-03 15:32:44.213710 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:32:44.213722 | orchestrator | Tuesday 03 June 2025 15:32:32 +0000 (0:00:08.666) 0:00:15.717 **********
2025-06-03 15:32:44.213734 | orchestrator | ===============================================================================
2025-06-03 15:32:44.213747 | orchestrator | memcached : Restart memcached container --------------------------------- 8.67s
2025-06-03 15:32:44.213759 | orchestrator | memcached : Check memcached container ----------------------------------- 2.32s
2025-06-03 15:32:44.213770 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.97s
2025-06-03 15:32:44.213783 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s
2025-06-03 15:32:44.213795 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.70s
2025-06-03 15:32:44.213807 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.62s
2025-06-03 15:32:44.213819 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s
2025-06-03 15:32:44.213831 | orchestrator |
2025-06-03 15:32:44.213844 | orchestrator |
2025-06-03 15:32:44.213862 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-03 15:32:44.213882 | orchestrator |
2025-06-03 15:32:44.213903 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-03 15:32:44.213921 | orchestrator | Tuesday 03 June 2025 15:32:16 +0000 (0:00:00.272) 0:00:00.272 **********
2025-06-03 15:32:44.213940 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:32:44.213959 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:32:44.213976 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:32:44.213996 | orchestrator |
2025-06-03 15:32:44.214072 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-03 15:32:44.214108 | orchestrator | Tuesday 03 June 2025 15:32:16 +0000 (0:00:00.425) 0:00:00.698 **********
2025-06-03 15:32:44.214120 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-06-03 15:32:44.214131 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-06-03 15:32:44.214153 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-06-03 15:32:44.214164 | orchestrator |
2025-06-03 15:32:44.214175 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-06-03 15:32:44.214186 | orchestrator |
2025-06-03 15:32:44.214205 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-06-03 15:32:44.214223 | orchestrator | Tuesday 03 June 2025 15:32:17 +0000 (0:00:00.438) 0:00:01.136 **********
2025-06-03 15:32:44.214241 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:32:44.214258 | orchestrator |
2025-06-03 15:32:44.214276 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-06-03 15:32:44.214291 | orchestrator | Tuesday 03 June 2025 15:32:18 +0000 (0:00:00.623) 0:00:01.760 **********
2025-06-03 15:32:44.214320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214460 | orchestrator |
2025-06-03 15:32:44.214471 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-06-03 15:32:44.214482 | orchestrator | Tuesday 03 June 2025 15:32:19 +0000 (0:00:01.218) 0:00:02.978 **********
2025-06-03 15:32:44.214499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214678 | orchestrator |
2025-06-03 15:32:44.214756 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-06-03 15:32:44.214767 | orchestrator | Tuesday 03 June 2025 15:32:22 +0000 (0:00:02.887) 0:00:05.866 **********
2025-06-03 15:32:44.214779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214868 | orchestrator |
2025-06-03 15:32:44.214888 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-06-03 15:32:44.214899 | orchestrator | Tuesday 03 June 2025 15:32:25 +0000 (0:00:03.753) 0:00:09.619 **********
2025-06-03 15:32:44.214911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-03 15:32:44.214999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-03 15:32:44.215018 | orchestrator |
2025-06-03 15:32:44.215146 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-03 15:32:44.215168 | orchestrator | Tuesday 03 June 2025 15:32:28 +0000 (0:00:02.264) 0:00:11.884 **********
2025-06-03 15:32:44.215187 | orchestrator |
2025-06-03 15:32:44.215199 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-03 15:32:44.215219 | orchestrator | Tuesday 03 June 2025 15:32:28 +0000 (0:00:00.155) 0:00:12.039 **********
2025-06-03 15:32:44.215230 | orchestrator |
2025-06-03 15:32:44.215241 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-03 15:32:44.215252 | orchestrator | Tuesday 03 June 2025 15:32:28 +0000 (0:00:00.093) 0:00:12.133 **********
2025-06-03 15:32:44.215263 | orchestrator |
2025-06-03 15:32:44.215274 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-06-03 15:32:44.215285 | orchestrator | Tuesday 03 June 2025 15:32:28 +0000 (0:00:00.074) 0:00:12.208 **********
2025-06-03 15:32:44.215295 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:32:44.215312 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:32:44.215332 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:32:44.215350 | orchestrator |
2025-06-03 15:32:44.215368 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-06-03 15:32:44.215381 | orchestrator | Tuesday 03 June 2025 15:32:37 +0000 (0:00:09.283) 0:00:21.491 **********
2025-06-03 15:32:44.215392 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:32:44.215403 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:32:44.215414 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:32:44.215425 | orchestrator |
2025-06-03 15:32:44.215442 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:32:44.215454 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:32:44.215466 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:32:44.215477 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:32:44.215488 | orchestrator |
2025-06-03 15:32:44.215498 | orchestrator |
2025-06-03 15:32:44.215509 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:32:44.215520 | orchestrator | Tuesday 03 June 2025 15:32:42 +0000 (0:00:04.364) 0:00:25.856 **********
2025-06-03 15:32:44.215531 | orchestrator | ===============================================================================
2025-06-03 15:32:44.215542 | orchestrator | redis : Restart redis container ----------------------------------------- 9.28s
2025-06-03 15:32:44.215557 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.36s
2025-06-03 15:32:44.215576 | orchestrator | redis : Copying over redis config files --------------------------------- 3.75s
2025-06-03 15:32:44.215595 | orchestrator | redis : Copying over default config.json files -------------------------- 2.89s
2025-06-03 15:32:44.215613 | orchestrator | redis : Check redis containers ------------------------------------------ 2.26s
2025-06-03 15:32:44.215671 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.22s
2025-06-03 15:32:44.215690 | orchestrator | redis : include_tasks --------------------------------------------------- 0.62s
2025-06-03 15:32:44.215707 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2025-06-03 15:32:44.215724 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s
2025-06-03 15:32:44.215742 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.32s
2025-06-03 15:32:44.215916 | orchestrator | 2025-06-03 15:32:44 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:32:44.215938 | orchestrator | 2025-06-03 15:32:44 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:32:47.257980 | orchestrator | 2025-06-03 15:32:47 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:32:47.261136 | orchestrator | 2025-06-03 15:32:47 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED
2025-06-03 15:32:47.261947 | orchestrator | 2025-06-03 15:32:47 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:32:47.263059 | orchestrator | 2025-06-03 15:32:47 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED
2025-06-03 15:32:47.264264 | orchestrator | 2025-06-03 15:32:47 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:32:47.264289 | orchestrator | 2025-06-03 15:32:47 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:32:50.290228 | orchestrator | 2025-06-03 15:32:50 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:32:50.299361 | orchestrator | 2025-06-03 15:32:50 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED
2025-06-03 15:32:50.299530 | orchestrator | 2025-06-03 15:32:50 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:32:50.306237 | orchestrator | 2025-06-03 15:32:50 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED
2025-06-03 15:32:50.306306 | orchestrator | 2025-06-03 15:32:50 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:32:50.306317 | orchestrator | 2025-06-03 15:32:50 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:32:53.350806 | orchestrator | 2025-06-03 15:32:53 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:32:53.352367 | orchestrator | 2025-06-03 15:32:53 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED
2025-06-03 15:32:53.352776 | orchestrator | 2025-06-03 15:32:53 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:32:53.359252 | orchestrator | 2025-06-03 15:32:53 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED
2025-06-03 15:32:53.359335 | orchestrator | 2025-06-03 15:32:53 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:32:53.359347 | orchestrator | 2025-06-03 15:32:53 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:32:56.405102 | orchestrator | 2025-06-03 15:32:56 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:32:56.405323 | orchestrator | 2025-06-03 15:32:56 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED
2025-06-03 15:32:56.406145 | orchestrator | 2025-06-03 15:32:56 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:32:56.406914 | orchestrator | 2025-06-03 15:32:56 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED
2025-06-03 15:32:56.407313 | orchestrator | 2025-06-03 15:32:56 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:32:56.407379 | orchestrator | 2025-06-03 15:32:56 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:32:59.437314 | orchestrator | 2025-06-03 15:32:59 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:32:59.438946 | orchestrator | 2025-06-03 15:32:59 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED
2025-06-03 15:32:59.438973 | orchestrator | 2025-06-03 15:32:59 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:32:59.440551 | orchestrator | 2025-06-03 15:32:59 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED
2025-06-03 15:32:59.441208 | orchestrator | 2025-06-03 15:32:59 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:32:59.441273 | orchestrator | 2025-06-03 15:32:59 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:33:02.482570 | orchestrator | 2025-06-03 15:33:02 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:33:02.482711 | orchestrator | 2025-06-03 15:33:02 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED
2025-06-03 15:33:02.482723 | orchestrator | 2025-06-03 15:33:02 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:33:02.482731 | orchestrator | 2025-06-03 15:33:02 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED
2025-06-03 15:33:02.482738 | orchestrator | 2025-06-03 15:33:02 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:33:02.482746 | orchestrator | 2025-06-03 15:33:02 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:33:05.512879 | orchestrator | 2025-06-03 15:33:05 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:33:05.514255 | orchestrator | 2025-06-03 15:33:05 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED
2025-06-03 15:33:05.516398 | orchestrator | 2025-06-03 15:33:05 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:33:05.517598 | orchestrator | 2025-06-03 15:33:05 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED
2025-06-03 15:33:05.518543 | orchestrator | 2025-06-03 15:33:05 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:33:05.518656 | orchestrator | 2025-06-03 15:33:05 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:33:08.542445 | orchestrator | 2025-06-03 15:33:08 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:33:08.542572 | orchestrator | 2025-06-03 15:33:08 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED
2025-06-03 15:33:08.543159 | orchestrator | 2025-06-03 15:33:08 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED
2025-06-03 15:33:08.546559 | orchestrator | 2025-06-03 15:33:08 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED
2025-06-03 15:33:08.547138 | orchestrator | 2025-06-03 15:33:08 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:33:08.547180 | orchestrator | 2025-06-03 15:33:08 | INFO  | Wait 1
second(s) until the next check 2025-06-03 15:33:11.576279 | orchestrator | 2025-06-03 15:33:11 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:33:11.577327 | orchestrator | 2025-06-03 15:33:11 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:33:11.580098 | orchestrator | 2025-06-03 15:33:11 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:33:11.580938 | orchestrator | 2025-06-03 15:33:11 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED 2025-06-03 15:33:11.582077 | orchestrator | 2025-06-03 15:33:11 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:33:11.582117 | orchestrator | 2025-06-03 15:33:11 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:14.620820 | orchestrator | 2025-06-03 15:33:14 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:33:14.620949 | orchestrator | 2025-06-03 15:33:14 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:33:14.620966 | orchestrator | 2025-06-03 15:33:14 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:33:14.621428 | orchestrator | 2025-06-03 15:33:14 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED 2025-06-03 15:33:14.622256 | orchestrator | 2025-06-03 15:33:14 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:33:14.625029 | orchestrator | 2025-06-03 15:33:14 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:17.662280 | orchestrator | 2025-06-03 15:33:17 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:33:17.662552 | orchestrator | 2025-06-03 15:33:17 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:33:17.669408 | orchestrator | 2025-06-03 15:33:17 | INFO  | Task 
aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:33:17.669482 | orchestrator | 2025-06-03 15:33:17 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED 2025-06-03 15:33:17.669497 | orchestrator | 2025-06-03 15:33:17 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:33:17.669509 | orchestrator | 2025-06-03 15:33:17 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:20.715143 | orchestrator | 2025-06-03 15:33:20 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:33:20.715581 | orchestrator | 2025-06-03 15:33:20 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:33:20.717115 | orchestrator | 2025-06-03 15:33:20 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:33:20.718632 | orchestrator | 2025-06-03 15:33:20 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED 2025-06-03 15:33:20.720308 | orchestrator | 2025-06-03 15:33:20 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:33:20.720546 | orchestrator | 2025-06-03 15:33:20 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:23.757488 | orchestrator | 2025-06-03 15:33:23 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:33:23.757574 | orchestrator | 2025-06-03 15:33:23 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:33:23.757924 | orchestrator | 2025-06-03 15:33:23 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:33:23.758943 | orchestrator | 2025-06-03 15:33:23 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state STARTED 2025-06-03 15:33:23.759715 | orchestrator | 2025-06-03 15:33:23 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:33:23.760127 | orchestrator | 2025-06-03 15:33:23 | INFO  | Wait 1 
second(s) until the next check 2025-06-03 15:33:26.796745 | orchestrator | 2025-06-03 15:33:26 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:33:26.797458 | orchestrator | 2025-06-03 15:33:26 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:33:26.798748 | orchestrator | 2025-06-03 15:33:26 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:33:26.800703 | orchestrator | 2025-06-03 15:33:26 | INFO  | Task a9ef547f-fcf5-4839-9375-f5126546143c is in state SUCCESS 2025-06-03 15:33:26.800835 | orchestrator | 2025-06-03 15:33:26.802475 | orchestrator | 2025-06-03 15:33:26.802526 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:33:26.802538 | orchestrator | 2025-06-03 15:33:26.802547 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:33:26.802556 | orchestrator | Tuesday 03 June 2025 15:32:17 +0000 (0:00:00.441) 0:00:00.441 ********** 2025-06-03 15:33:26.802565 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:33:26.802575 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:33:26.802584 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:33:26.802593 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:26.802601 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:26.802610 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:26.802619 | orchestrator | 2025-06-03 15:33:26.802628 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:33:26.802747 | orchestrator | Tuesday 03 June 2025 15:32:18 +0000 (0:00:00.781) 0:00:01.222 ********** 2025-06-03 15:33:26.802763 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-03 15:33:26.802772 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 
2025-06-03 15:33:26.802781 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-03 15:33:26.802790 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-03 15:33:26.802812 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-03 15:33:26.802822 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-03 15:33:26.802830 | orchestrator |
2025-06-03 15:33:26.802839 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-06-03 15:33:26.802847 | orchestrator |
2025-06-03 15:33:26.802856 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-06-03 15:33:26.802865 | orchestrator | Tuesday 03 June 2025 15:32:18 +0000 (0:00:00.898) 0:00:02.121 **********
2025-06-03 15:33:26.802875 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:33:26.802885 | orchestrator |
2025-06-03 15:33:26.802894 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-03 15:33:26.802902 | orchestrator | Tuesday 03 June 2025 15:32:20 +0000 (0:00:01.565) 0:00:03.687 **********
2025-06-03 15:33:26.802911 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-03 15:33:26.802921 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-03 15:33:26.802929 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-03 15:33:26.802938 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-03 15:33:26.802946 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-03 15:33:26.802955 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-03 15:33:26.802963 | orchestrator |
2025-06-03 15:33:26.802972 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-03 15:33:26.802981 | orchestrator | Tuesday 03 June 2025 15:32:22 +0000 (0:00:01.713) 0:00:05.400 **********
2025-06-03 15:33:26.802990 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-03 15:33:26.802998 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-03 15:33:26.803007 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-03 15:33:26.803016 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-03 15:33:26.803041 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-03 15:33:26.803051 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-03 15:33:26.803059 | orchestrator |
2025-06-03 15:33:26.803068 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-03 15:33:26.803076 | orchestrator | Tuesday 03 June 2025 15:32:24 +0000 (0:00:02.081) 0:00:07.481 **********
2025-06-03 15:33:26.803085 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-06-03 15:33:26.803093 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:33:26.803103 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-06-03 15:33:26.803111 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:33:26.803120 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-06-03 15:33:26.803128 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:33:26.803137 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-06-03 15:33:26.803145 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:33:26.803154 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-06-03 15:33:26.803162 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:33:26.803171 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-06-03 15:33:26.803180 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:33:26.803188 | orchestrator |
2025-06-03 15:33:26.803197 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-06-03 15:33:26.803205 | orchestrator | Tuesday 03 June 2025 15:32:26 +0000 (0:00:02.692) 0:00:10.174 **********
2025-06-03 15:33:26.803214 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:33:26.803223 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:33:26.803231 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:33:26.803244 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:33:26.803259 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:33:26.803274 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:33:26.803288 | orchestrator |
2025-06-03 15:33:26.803303 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-06-03 15:33:26.803317 | orchestrator | Tuesday 03 June 2025 15:32:28 +0000 (0:00:01.348) 0:00:11.523 **********
2025-06-03 15:33:26.803349 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-03 15:33:26.803368 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-03 15:33:26.803378 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-03 15:33:26.803395 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-03 15:33:26.803405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-03 15:33:26.803414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-03 15:33:26.803429 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-03 15:33:26.803443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-03 15:33:26.803458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-03 15:33:26.803467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-03 15:33:26.803476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-03 15:33:26.803491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-03 15:33:26.803500 | orchestrator |
2025-06-03 15:33:26.803509 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-06-03 15:33:26.803518 | orchestrator | Tuesday 03 June 2025 15:32:30 +0000 (0:00:02.547) 0:00:14.070 **********
2025-06-03 15:33:26.803527 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-03 15:33:26.803541 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-03 15:33:26.803555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-03 15:33:26.803564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-03 15:33:26.803574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-03 15:33:26.803597 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-03 15:33:26.803611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-03 15:33:26.803625 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-03 15:33:26.803634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-03 15:33:26.803672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-03 15:33:26.803681 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-03 15:33:26.803700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-03 15:33:26.803715 | orchestrator |
2025-06-03 15:33:26.803724 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-06-03 15:33:26.803733 | orchestrator | Tuesday 03 June 2025 15:32:34 +0000 (0:00:03.544) 0:00:17.615 **********
2025-06-03 15:33:26.803745 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:33:26.803760 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:33:26.803782 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:33:26.803796 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:33:26.803810 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:33:26.803824 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:33:26.803839 | orchestrator |
2025-06-03 15:33:26.803854 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-06-03 15:33:26.803869 | orchestrator | Tuesday 03 June 2025 15:32:35 +0000 (0:00:01.288) 0:00:18.903 **********
2025-06-03 15:33:26.803880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-03 15:33:26.803890 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-03 15:33:26.803899 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-03 15:33:26.803915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-03 15:33:26.803932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-03 15:33:26.803957 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:26.803966 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:26.803976 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:26.803985 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:26.803993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-03 15:33:26.804009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:26.804029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-03 15:33:26.804039 | orchestrator | 2025-06-03 15:33:26.804048 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-03 15:33:26.804056 | orchestrator | Tuesday 03 June 2025 15:32:39 +0000 (0:00:03.969) 0:00:22.873 ********** 2025-06-03 15:33:26.804065 | orchestrator | 2025-06-03 15:33:26.804074 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-03 15:33:26.804082 | orchestrator | Tuesday 03 June 2025 15:32:39 +0000 (0:00:00.262) 0:00:23.135 ********** 2025-06-03 15:33:26.804091 | orchestrator | 2025-06-03 15:33:26.804099 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-03 15:33:26.804108 | orchestrator | Tuesday 03 June 2025 15:32:40 +0000 (0:00:00.151) 0:00:23.287 ********** 2025-06-03 15:33:26.804116 | orchestrator | 2025-06-03 15:33:26.804125 | orchestrator 
| TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-03 15:33:26.804133 | orchestrator | Tuesday 03 June 2025 15:32:40 +0000 (0:00:00.248) 0:00:23.535 ********** 2025-06-03 15:33:26.804142 | orchestrator | 2025-06-03 15:33:26.804150 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-03 15:33:26.804159 | orchestrator | Tuesday 03 June 2025 15:32:40 +0000 (0:00:00.396) 0:00:23.931 ********** 2025-06-03 15:33:26.804167 | orchestrator | 2025-06-03 15:33:26.804176 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-03 15:33:26.804184 | orchestrator | Tuesday 03 June 2025 15:32:41 +0000 (0:00:00.371) 0:00:24.303 ********** 2025-06-03 15:33:26.804193 | orchestrator | 2025-06-03 15:33:26.804201 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-06-03 15:33:26.804210 | orchestrator | Tuesday 03 June 2025 15:32:41 +0000 (0:00:00.291) 0:00:24.594 ********** 2025-06-03 15:33:26.804218 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:26.804227 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:33:26.804235 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:33:26.804244 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:26.804252 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:33:26.804261 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:33:26.804269 | orchestrator | 2025-06-03 15:33:26.804278 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-06-03 15:33:26.804286 | orchestrator | Tuesday 03 June 2025 15:32:52 +0000 (0:00:11.063) 0:00:35.658 ********** 2025-06-03 15:33:26.804295 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:33:26.804304 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:33:26.804313 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:33:26.804321 | 
orchestrator | ok: [testbed-node-0] 2025-06-03 15:33:26.804330 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:33:26.804338 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:33:26.804347 | orchestrator | 2025-06-03 15:33:26.804356 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-03 15:33:26.804364 | orchestrator | Tuesday 03 June 2025 15:32:54 +0000 (0:00:01.754) 0:00:37.413 ********** 2025-06-03 15:33:26.804378 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:33:26.804387 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:33:26.804395 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:33:26.804404 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:26.804412 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:26.804421 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:33:26.804429 | orchestrator | 2025-06-03 15:33:26.804437 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-06-03 15:33:26.804446 | orchestrator | Tuesday 03 June 2025 15:33:01 +0000 (0:00:07.758) 0:00:45.171 ********** 2025-06-03 15:33:26.804455 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-06-03 15:33:26.804464 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-06-03 15:33:26.804472 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-06-03 15:33:26.804481 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-06-03 15:33:26.804490 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-06-03 15:33:26.804503 | orchestrator | changed: [testbed-node-2] => 
(item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-06-03 15:33:26.804512 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-06-03 15:33:26.804520 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-06-03 15:33:26.804529 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-06-03 15:33:26.804537 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-06-03 15:33:26.804546 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-06-03 15:33:26.804555 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-06-03 15:33:26.804563 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-03 15:33:26.804576 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-03 15:33:26.804585 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-03 15:33:26.804593 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-03 15:33:26.804660 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-03 15:33:26.804672 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-03 15:33:26.804681 | orchestrator | 2025-06-03 15:33:26.804690 | 
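The "Set system-id, hostname and hw-offload" task above loops over `{'col', 'name', 'value'}` items, and items carrying `'state': 'absent'` report `ok` rather than `changed` because the key is simply removed. A minimal sketch of how such loop items map onto `ovs-vsctl` invocations (command construction only; this is an illustration of the pattern, not the kolla-ansible module implementation):

```python
# Sketch: build the ovs-vsctl command implied by one loop item from the
# task output above. Items with state=absent remove the key (hw-offload
# disabled); all others set col:name=value on the Open_vSwitch table.

def ovs_vsctl_command(item):
    """Return the ovs-vsctl command string for one (col, name, value) item."""
    col = item["col"]    # e.g. 'external_ids' or 'other_config'
    name = item["name"]  # e.g. 'system-id', 'hostname', 'hw-offload'
    if item.get("state") == "absent":
        return f"ovs-vsctl remove Open_vSwitch . {col} {name}"
    return f"ovs-vsctl set Open_vSwitch . {col}:{name}={item['value']}"

items = [
    {"col": "external_ids", "name": "system-id", "value": "testbed-node-5"},
    {"col": "external_ids", "name": "hostname", "value": "testbed-node-5"},
    {"col": "other_config", "name": "hw-offload", "value": True, "state": "absent"},
]
for it in items:
    print(ovs_vsctl_command(it))
```

The `state: absent` branch matches the log: the hw-offload items come back `ok` on every node because the key is already missing.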
orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-06-03 15:33:26.804698 | orchestrator | Tuesday 03 June 2025 15:33:09 +0000 (0:00:07.636) 0:00:52.808 ********** 2025-06-03 15:33:26.804707 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-06-03 15:33:26.804715 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:26.804724 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-06-03 15:33:26.804733 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:26.804741 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-06-03 15:33:26.804749 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:26.804758 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-06-03 15:33:26.804774 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-06-03 15:33:26.804782 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-06-03 15:33:26.804790 | orchestrator | 2025-06-03 15:33:26.804799 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-06-03 15:33:26.804808 | orchestrator | Tuesday 03 June 2025 15:33:12 +0000 (0:00:02.464) 0:00:55.272 ********** 2025-06-03 15:33:26.804816 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-06-03 15:33:26.804825 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:33:26.804833 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-06-03 15:33:26.804842 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:33:26.804850 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-06-03 15:33:26.804859 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:33:26.804868 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-06-03 15:33:26.804876 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-06-03 15:33:26.804885 | orchestrator | 
changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-06-03 15:33:26.804893 | orchestrator | 2025-06-03 15:33:26.804902 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-03 15:33:26.804911 | orchestrator | Tuesday 03 June 2025 15:33:16 +0000 (0:00:04.068) 0:00:59.341 ********** 2025-06-03 15:33:26.804919 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:33:26.804928 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:33:26.804936 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:33:26.804944 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:33:26.804953 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:33:26.804961 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:33:26.804970 | orchestrator | 2025-06-03 15:33:26.804978 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:33:26.804987 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-03 15:33:26.804997 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-03 15:33:26.805005 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-03 15:33:26.805014 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:33:26.805022 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:33:26.805037 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:33:26.805046 | orchestrator | 2025-06-03 15:33:26.805054 | orchestrator | 2025-06-03 15:33:26.805063 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:33:26.805072 | orchestrator | Tuesday 03 
June 2025 15:33:24 +0000 (0:00:08.097) 0:01:07.438 ********** 2025-06-03 15:33:26.805080 | orchestrator | =============================================================================== 2025-06-03 15:33:26.805089 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.86s 2025-06-03 15:33:26.805097 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.06s 2025-06-03 15:33:26.805106 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.64s 2025-06-03 15:33:26.805114 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.07s 2025-06-03 15:33:26.805123 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.97s 2025-06-03 15:33:26.805137 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.54s 2025-06-03 15:33:26.805145 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.69s 2025-06-03 15:33:26.805158 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.55s 2025-06-03 15:33:26.805167 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.46s 2025-06-03 15:33:26.805175 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.08s 2025-06-03 15:33:26.805184 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.75s 2025-06-03 15:33:26.805193 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.72s 2025-06-03 15:33:26.805201 | orchestrator | module-load : Load modules ---------------------------------------------- 1.71s 2025-06-03 15:33:26.805209 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.57s 2025-06-03 15:33:26.805218 | orchestrator | openvswitch : Create 
/run/openvswitch directory on host ----------------- 1.35s 2025-06-03 15:33:26.805226 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.29s 2025-06-03 15:33:26.805235 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.90s 2025-06-03 15:33:26.805243 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.78s 2025-06-03 15:33:26.805353 | orchestrator | 2025-06-03 15:33:26 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:33:26.805366 | orchestrator | 2025-06-03 15:33:26 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:33:26.805375 | orchestrator | 2025-06-03 15:33:26 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:29.840160 | orchestrator | 2025-06-03 15:33:29 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:33:29.840980 | orchestrator | 2025-06-03 15:33:29 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:33:29.842627 | orchestrator | 2025-06-03 15:33:29 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:33:29.843319 | orchestrator | 2025-06-03 15:33:29 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:33:29.844426 | orchestrator | 2025-06-03 15:33:29 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:33:29.844464 | orchestrator | 2025-06-03 15:33:29 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:32.875532 | orchestrator | 2025-06-03 15:33:32 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:33:32.879985 | orchestrator | 2025-06-03 15:33:32 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:33:32.883185 | orchestrator | 2025-06-03 15:33:32 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is 
in state STARTED 2025-06-03 15:33:32.883251 | orchestrator | 2025-06-03 15:33:32 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:33:32.887094 | orchestrator | 2025-06-03 15:33:32 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:33:32.887197 | orchestrator | 2025-06-03 15:33:32 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:35.907398 | orchestrator | 2025-06-03 15:33:35 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:33:35.907520 | orchestrator | 2025-06-03 15:33:35 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:33:35.908985 | orchestrator | 2025-06-03 15:33:35 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:33:35.909016 | orchestrator | 2025-06-03 15:33:35 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:33:35.909155 | orchestrator | 2025-06-03 15:33:35 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:33:35.909481 | orchestrator | 2025-06-03 15:33:35 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:38.935830 | orchestrator | 2025-06-03 15:33:38 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:33:38.941376 | orchestrator | 2025-06-03 15:33:38 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:33:38.946341 | orchestrator | 2025-06-03 15:33:38 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:33:38.949790 | orchestrator | 2025-06-03 15:33:38 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:33:38.949850 | orchestrator | 2025-06-03 15:33:38 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:33:38.949867 | orchestrator | 2025-06-03 15:33:38 | INFO  | Wait 1 second(s) until the next check 2025-06-03 
15:33:41.986482 | orchestrator | 2025-06-03 15:33:41 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:33:41.986576 | orchestrator | 2025-06-03 15:33:41 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:33:41.993279 | orchestrator | 2025-06-03 15:33:41 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:33:41.993609 | orchestrator | 2025-06-03 15:33:41 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:33:41.994309 | orchestrator | 2025-06-03 15:33:41 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:33:41.994355 | orchestrator | 2025-06-03 15:33:41 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:45.039449 | orchestrator | 2025-06-03 15:33:45 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:33:45.039537 | orchestrator | 2025-06-03 15:33:45 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:33:45.039546 | orchestrator | 2025-06-03 15:33:45 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:33:45.039553 | orchestrator | 2025-06-03 15:33:45 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:33:45.039560 | orchestrator | 2025-06-03 15:33:45 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:33:45.039566 | orchestrator | 2025-06-03 15:33:45 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:48.074484 | orchestrator | 2025-06-03 15:33:48 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:33:48.075351 | orchestrator | 2025-06-03 15:33:48 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:33:48.075384 | orchestrator | 2025-06-03 15:33:48 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 
15:33:48.076317 | orchestrator | 2025-06-03 15:33:48 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:33:48.076902 | orchestrator | 2025-06-03 15:33:48 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:33:48.076926 | orchestrator | 2025-06-03 15:33:48 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:51.118563 | orchestrator | 2025-06-03 15:33:51 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:33:51.120766 | orchestrator | 2025-06-03 15:33:51 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:33:51.123120 | orchestrator | 2025-06-03 15:33:51 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:33:51.125004 | orchestrator | 2025-06-03 15:33:51 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:33:51.126472 | orchestrator | 2025-06-03 15:33:51 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:33:51.126893 | orchestrator | 2025-06-03 15:33:51 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:54.170998 | orchestrator | 2025-06-03 15:33:54 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:33:54.171351 | orchestrator | 2025-06-03 15:33:54 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:33:54.172594 | orchestrator | 2025-06-03 15:33:54 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:33:54.175998 | orchestrator | 2025-06-03 15:33:54 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:33:54.176064 | orchestrator | 2025-06-03 15:33:54 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:33:54.176080 | orchestrator | 2025-06-03 15:33:54 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:33:57.208817 | orchestrator 
| 2025-06-03 15:33:57 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:33:57.209007 | orchestrator | 2025-06-03 15:33:57 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:33:57.210153 | orchestrator | 2025-06-03 15:33:57 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:33:57.210851 | orchestrator | 2025-06-03 15:33:57 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:33:57.211737 | orchestrator | 2025-06-03 15:33:57 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:33:57.211777 | orchestrator | 2025-06-03 15:33:57 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:00.248435 | orchestrator | 2025-06-03 15:34:00 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:00.251251 | orchestrator | 2025-06-03 15:34:00 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:34:00.251318 | orchestrator | 2025-06-03 15:34:00 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:34:00.252239 | orchestrator | 2025-06-03 15:34:00 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:00.254855 | orchestrator | 2025-06-03 15:34:00 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:00.254893 | orchestrator | 2025-06-03 15:34:00 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:03.294326 | orchestrator | 2025-06-03 15:34:03 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:03.296350 | orchestrator | 2025-06-03 15:34:03 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:34:03.298488 | orchestrator | 2025-06-03 15:34:03 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:34:03.300031 | orchestrator | 
2025-06-03 15:34:03 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:03.301361 | orchestrator | 2025-06-03 15:34:03 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:03.301403 | orchestrator | 2025-06-03 15:34:03 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:06.374829 | orchestrator | 2025-06-03 15:34:06 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:06.375804 | orchestrator | 2025-06-03 15:34:06 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:34:06.378005 | orchestrator | 2025-06-03 15:34:06 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:34:06.378833 | orchestrator | 2025-06-03 15:34:06 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:06.379518 | orchestrator | 2025-06-03 15:34:06 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:06.379756 | orchestrator | 2025-06-03 15:34:06 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:09.420337 | orchestrator | 2025-06-03 15:34:09 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:09.421547 | orchestrator | 2025-06-03 15:34:09 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:34:09.424086 | orchestrator | 2025-06-03 15:34:09 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:34:09.426641 | orchestrator | 2025-06-03 15:34:09 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:09.431277 | orchestrator | 2025-06-03 15:34:09 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:09.432110 | orchestrator | 2025-06-03 15:34:09 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:12.473601 | orchestrator | 2025-06-03 15:34:12 | INFO  | 
Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:12.473819 | orchestrator | 2025-06-03 15:34:12 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:34:12.474986 | orchestrator | 2025-06-03 15:34:12 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:34:12.475306 | orchestrator | 2025-06-03 15:34:12 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:12.476008 | orchestrator | 2025-06-03 15:34:12 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:12.476041 | orchestrator | 2025-06-03 15:34:12 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:15.516045 | orchestrator | 2025-06-03 15:34:15 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:15.517311 | orchestrator | 2025-06-03 15:34:15 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:34:15.518759 | orchestrator | 2025-06-03 15:34:15 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:34:15.521475 | orchestrator | 2025-06-03 15:34:15 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:15.522547 | orchestrator | 2025-06-03 15:34:15 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:15.522741 | orchestrator | 2025-06-03 15:34:15 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:18.553291 | orchestrator | 2025-06-03 15:34:18 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:18.553429 | orchestrator | 2025-06-03 15:34:18 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:34:18.554389 | orchestrator | 2025-06-03 15:34:18 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state STARTED 2025-06-03 15:34:18.555182 | orchestrator | 2025-06-03 15:34:18 | INFO  | Task 
874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:18.555904 | orchestrator | 2025-06-03 15:34:18 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:18.555975 | orchestrator | 2025-06-03 15:34:18 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:21.598729 | orchestrator | 2025-06-03 15:34:21 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:21.600253 | orchestrator | 2025-06-03 15:34:21 | INFO  | Task ce5c7d0c-d3cf-4e08-9aa6-97f4ebbb6135 is in state STARTED 2025-06-03 15:34:21.602377 | orchestrator | 2025-06-03 15:34:21 | INFO  | Task bccc5784-62e4-40e0-b3a7-9b9dc1645d20 is in state STARTED 2025-06-03 15:34:21.603938 | orchestrator | 2025-06-03 15:34:21 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:34:21.605730 | orchestrator | 2025-06-03 15:34:21 | INFO  | Task aa9d758e-4c93-4d11-a652-a4d1eba549e0 is in state SUCCESS 2025-06-03 15:34:21.607219 | orchestrator | 2025-06-03 15:34:21.607265 | orchestrator | 2025-06-03 15:34:21.607318 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-06-03 15:34:21.607333 | orchestrator | 2025-06-03 15:34:21.607343 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-06-03 15:34:21.607354 | orchestrator | Tuesday 03 June 2025 15:29:41 +0000 (0:00:00.189) 0:00:00.189 ********** 2025-06-03 15:34:21.607363 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:34:21.607374 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:34:21.607384 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:34:21.607394 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:21.607404 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:21.607414 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:21.607424 | orchestrator | 2025-06-03 15:34:21.607433 | orchestrator | TASK [k3s_prereq : Set same 
timezone on every Server] ************************** 2025-06-03 15:34:21.607443 | orchestrator | Tuesday 03 June 2025 15:29:41 +0000 (0:00:00.608) 0:00:00.797 ********** 2025-06-03 15:34:21.607453 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:34:21.607485 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:34:21.607496 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:34:21.607506 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.607516 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.607525 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.607535 | orchestrator | 2025-06-03 15:34:21.607544 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-06-03 15:34:21.607554 | orchestrator | Tuesday 03 June 2025 15:29:42 +0000 (0:00:00.662) 0:00:01.460 ********** 2025-06-03 15:34:21.607563 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:34:21.607573 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:34:21.607583 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:34:21.607593 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.607603 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.607612 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.607622 | orchestrator | 2025-06-03 15:34:21.607631 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-06-03 15:34:21.607642 | orchestrator | Tuesday 03 June 2025 15:29:43 +0000 (0:00:00.887) 0:00:02.348 ********** 2025-06-03 15:34:21.607679 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:34:21.607690 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:34:21.607699 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:34:21.607709 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:21.607718 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:21.607728 | orchestrator | changed: 
[testbed-node-2] 2025-06-03 15:34:21.607737 | orchestrator | 2025-06-03 15:34:21.607747 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-06-03 15:34:21.607756 | orchestrator | Tuesday 03 June 2025 15:29:45 +0000 (0:00:02.039) 0:00:04.387 ********** 2025-06-03 15:34:21.607766 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:34:21.607775 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:34:21.607811 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:34:21.607822 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:21.607833 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:21.607844 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:21.607854 | orchestrator | 2025-06-03 15:34:21.607865 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-06-03 15:34:21.607875 | orchestrator | Tuesday 03 June 2025 15:29:46 +0000 (0:00:01.074) 0:00:05.461 ********** 2025-06-03 15:34:21.607884 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:34:21.607894 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:34:21.607904 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:34:21.607915 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:21.607925 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:21.607936 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:21.607946 | orchestrator | 2025-06-03 15:34:21.607956 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-06-03 15:34:21.607966 | orchestrator | Tuesday 03 June 2025 15:29:47 +0000 (0:00:00.924) 0:00:06.386 ********** 2025-06-03 15:34:21.607978 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:34:21.607988 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:34:21.607998 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:34:21.608008 | orchestrator | skipping: [testbed-node-0] 
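The `k3s_prereq` tasks above ("Enable IPv4 forwarding", "Enable IPv6 forwarding", "Enable IPv6 router advertisements") toggle kernel sysctls on every node. A minimal sketch of the equivalent settings, assuming the role applies them via `ansible.posix.sysctl` (the file name `90-k3s-forwarding.conf` and the drop-in approach are illustrative, not taken from the role):

```shell
# Sketch only: write the forwarding settings the log shows as a sysctl drop-in,
# so they persist across reboots. On a real node this file would live under
# /etc/sysctl.d/ and be activated with `sysctl --system` (root required);
# here it is written to the current directory for illustration.
cat > ./90-k3s-forwarding.conf <<'EOF'
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.accept_ra = 2
EOF
# Count the key=value pairs written (3 settings).
grep -c '=' ./90-k3s-forwarding.conf
```

`accept_ra = 2` corresponds to the "Enable IPv6 router advertisements" task: it keeps accepting router advertisements even with forwarding enabled, which would otherwise disable them.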
2025-06-03 15:34:21.608018 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.608027 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.608036 | orchestrator | 2025-06-03 15:34:21.608060 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-06-03 15:34:21.608069 | orchestrator | Tuesday 03 June 2025 15:29:48 +0000 (0:00:00.890) 0:00:07.277 ********** 2025-06-03 15:34:21.608078 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:34:21.608087 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:34:21.608096 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:34:21.608106 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.608115 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.608124 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.608133 | orchestrator | 2025-06-03 15:34:21.608143 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-06-03 15:34:21.608153 | orchestrator | Tuesday 03 June 2025 15:29:48 +0000 (0:00:00.720) 0:00:07.997 ********** 2025-06-03 15:34:21.608162 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-03 15:34:21.608172 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-03 15:34:21.608181 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:34:21.608192 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-03 15:34:21.608201 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-03 15:34:21.608210 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:34:21.608220 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-03 15:34:21.608229 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-03 
15:34:21.608239 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:34:21.608248 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-03 15:34:21.608271 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-03 15:34:21.608281 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.608290 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-03 15:34:21.608299 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-03 15:34:21.608309 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.608319 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-03 15:34:21.608329 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-03 15:34:21.608347 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.608357 | orchestrator | 2025-06-03 15:34:21.608367 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-06-03 15:34:21.608376 | orchestrator | Tuesday 03 June 2025 15:29:49 +0000 (0:00:01.045) 0:00:09.042 ********** 2025-06-03 15:34:21.608386 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:34:21.608396 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:34:21.608405 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:34:21.608414 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.608424 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.608433 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.608442 | orchestrator | 2025-06-03 15:34:21.608452 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-06-03 15:34:21.608463 | orchestrator | Tuesday 03 June 2025 15:29:51 +0000 (0:00:01.371) 0:00:10.414 
********** 2025-06-03 15:34:21.608472 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:34:21.608481 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:34:21.608490 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:34:21.608499 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:21.608508 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:21.608517 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:21.608527 | orchestrator | 2025-06-03 15:34:21.608536 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-06-03 15:34:21.608545 | orchestrator | Tuesday 03 June 2025 15:29:51 +0000 (0:00:00.650) 0:00:11.064 ********** 2025-06-03 15:34:21.608555 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:21.608564 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:34:21.608573 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:21.608582 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:21.608591 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:34:21.608600 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:34:21.608610 | orchestrator | 2025-06-03 15:34:21.608619 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-06-03 15:34:21.608629 | orchestrator | Tuesday 03 June 2025 15:29:58 +0000 (0:00:06.458) 0:00:17.523 ********** 2025-06-03 15:34:21.608639 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:34:21.608648 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:34:21.608763 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:34:21.608774 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.608785 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.608796 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.608805 | orchestrator | 2025-06-03 15:34:21.608816 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-06-03 
15:34:21.608826 | orchestrator | Tuesday 03 June 2025 15:29:59 +0000 (0:00:01.006) 0:00:18.529 ********** 2025-06-03 15:34:21.608835 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:34:21.608845 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:34:21.608855 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:34:21.608866 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.608876 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.608886 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.608896 | orchestrator | 2025-06-03 15:34:21.608907 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-06-03 15:34:21.608918 | orchestrator | Tuesday 03 June 2025 15:30:01 +0000 (0:00:01.999) 0:00:20.528 ********** 2025-06-03 15:34:21.608928 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:34:21.608939 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:34:21.608948 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:34:21.608958 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.608977 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.608988 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.609010 | orchestrator | 2025-06-03 15:34:21.609021 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-06-03 15:34:21.609032 | orchestrator | Tuesday 03 June 2025 15:30:02 +0000 (0:00:00.974) 0:00:21.503 ********** 2025-06-03 15:34:21.609042 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-06-03 15:34:21.609052 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-06-03 15:34:21.609061 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:34:21.609070 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-06-03 15:34:21.609081 | orchestrator | skipping: [testbed-node-4] => 
(item=rancher/k3s)  2025-06-03 15:34:21.609092 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:34:21.609102 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-06-03 15:34:21.609113 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-06-03 15:34:21.609124 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:34:21.609135 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-06-03 15:34:21.609147 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-06-03 15:34:21.609157 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.609165 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-06-03 15:34:21.609173 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-06-03 15:34:21.609182 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.609192 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-06-03 15:34:21.609207 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-06-03 15:34:21.609221 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.609236 | orchestrator | 2025-06-03 15:34:21.609251 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-06-03 15:34:21.609282 | orchestrator | Tuesday 03 June 2025 15:30:03 +0000 (0:00:01.239) 0:00:22.744 ********** 2025-06-03 15:34:21.609295 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:34:21.609309 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:34:21.609321 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:34:21.609335 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.609347 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.609361 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.609374 | orchestrator | 2025-06-03 15:34:21.609387 | orchestrator | PLAY [Deploy k3s master nodes] 
************************************************* 2025-06-03 15:34:21.609399 | orchestrator | 2025-06-03 15:34:21.609413 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-06-03 15:34:21.609424 | orchestrator | Tuesday 03 June 2025 15:30:05 +0000 (0:00:01.739) 0:00:24.484 ********** 2025-06-03 15:34:21.609437 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:21.609451 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:21.609464 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:21.609476 | orchestrator | 2025-06-03 15:34:21.609489 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-06-03 15:34:21.609501 | orchestrator | Tuesday 03 June 2025 15:30:06 +0000 (0:00:01.212) 0:00:25.697 ********** 2025-06-03 15:34:21.609513 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:21.609525 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:21.609537 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:21.609549 | orchestrator | 2025-06-03 15:34:21.609562 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-06-03 15:34:21.609628 | orchestrator | Tuesday 03 June 2025 15:30:07 +0000 (0:00:01.152) 0:00:26.849 ********** 2025-06-03 15:34:21.609645 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:21.609684 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:21.609698 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:21.609713 | orchestrator | 2025-06-03 15:34:21.609721 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-06-03 15:34:21.609729 | orchestrator | Tuesday 03 June 2025 15:30:08 +0000 (0:00:00.960) 0:00:27.810 ********** 2025-06-03 15:34:21.609737 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:21.609794 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:21.609812 | orchestrator | ok: [testbed-node-2] 2025-06-03 
15:34:21.609827 | orchestrator | 2025-06-03 15:34:21.609841 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-06-03 15:34:21.609856 | orchestrator | Tuesday 03 June 2025 15:30:09 +0000 (0:00:00.749) 0:00:28.560 ********** 2025-06-03 15:34:21.609872 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.609889 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.609905 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.609923 | orchestrator | 2025-06-03 15:34:21.609940 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-06-03 15:34:21.609957 | orchestrator | Tuesday 03 June 2025 15:30:09 +0000 (0:00:00.303) 0:00:28.864 ********** 2025-06-03 15:34:21.609975 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:34:21.609995 | orchestrator | 2025-06-03 15:34:21.610010 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-06-03 15:34:21.610091 | orchestrator | Tuesday 03 June 2025 15:30:10 +0000 (0:00:00.698) 0:00:29.562 ********** 2025-06-03 15:34:21.610109 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:21.610138 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:21.610156 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:21.610173 | orchestrator | 2025-06-03 15:34:21.610191 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-06-03 15:34:21.610209 | orchestrator | Tuesday 03 June 2025 15:30:13 +0000 (0:00:03.477) 0:00:33.039 ********** 2025-06-03 15:34:21.610228 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.610246 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.610265 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:21.610282 | orchestrator | 2025-06-03 15:34:21.610301 | orchestrator | TASK 
[k3s_server : Download vip rbac manifest to first master] ***************** 2025-06-03 15:34:21.610317 | orchestrator | Tuesday 03 June 2025 15:30:14 +0000 (0:00:00.795) 0:00:33.835 ********** 2025-06-03 15:34:21.610335 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.610353 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.610370 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:21.610379 | orchestrator | 2025-06-03 15:34:21.610397 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-06-03 15:34:21.610405 | orchestrator | Tuesday 03 June 2025 15:30:15 +0000 (0:00:00.816) 0:00:34.651 ********** 2025-06-03 15:34:21.610413 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.610422 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.610430 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:21.610438 | orchestrator | 2025-06-03 15:34:21.610446 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-06-03 15:34:21.610454 | orchestrator | Tuesday 03 June 2025 15:30:17 +0000 (0:00:02.022) 0:00:36.673 ********** 2025-06-03 15:34:21.610462 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.610471 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.610480 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.610489 | orchestrator | 2025-06-03 15:34:21.610499 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-06-03 15:34:21.610509 | orchestrator | Tuesday 03 June 2025 15:30:17 +0000 (0:00:00.277) 0:00:36.951 ********** 2025-06-03 15:34:21.610519 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.610530 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.610540 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.610551 | orchestrator | 2025-06-03 15:34:21.610560 | orchestrator | TASK 
[k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-06-03 15:34:21.610570 | orchestrator | Tuesday 03 June 2025 15:30:18 +0000 (0:00:00.333) 0:00:37.285 ********** 2025-06-03 15:34:21.610580 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:21.610590 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:21.610599 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:21.610619 | orchestrator | 2025-06-03 15:34:21.610628 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-06-03 15:34:21.610638 | orchestrator | Tuesday 03 June 2025 15:30:20 +0000 (0:00:01.927) 0:00:39.213 ********** 2025-06-03 15:34:21.610686 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-03 15:34:21.610699 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-03 15:34:21.610737 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-03 15:34:21.610748 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-03 15:34:21.610756 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-03 15:34:21.610765 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-03 15:34:21.610774 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2025-06-03 15:34:21.610783 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-03 15:34:21.610793 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-03 15:34:21.610803 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-03 15:34:21.610812 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-03 15:34:21.610822 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-03 15:34:21.610831 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-03 15:34:21.610841 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-03 15:34:21.610850 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
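The `FAILED - RETRYING` lines above come from Ansible's `retries`/`until` mechanism: the "Verify that all nodes actually joined" task re-runs its check (starting from 20 retries in this log) until every master reports in. The same bounded-retry pattern can be sketched in plain shell; the `retry` helper name and the `k3s kubectl get nodes` usage line are illustrative, not the role's actual implementation:

```shell
# Bounded retry helper in the spirit of Ansible retries/delay:
# run a command up to N times, sleeping between attempts.
retry() {
  tries=$1; shift
  i=1
  while [ "$i" -le "$tries" ]; do
    "$@" && return 0
    echo "attempt $i/$tries failed; retrying" >&2
    sleep 1
    i=$((i + 1))
  done
  return 1
}

# Hypothetical usage on a master node (not executed here):
#   retry 20 k3s kubectl get nodes
retry 3 true && echo "check passed"
```

When the check finally succeeds, the task reports `ok` for each node, as seen immediately after the retry lines; if all retries are exhausted, the task fails and the log points at `k3s-init.service` for diagnosis.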
2025-06-03 15:34:21.610860 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:21.610870 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:21.610879 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:21.610888 | orchestrator | 2025-06-03 15:34:21.610897 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-06-03 15:34:21.610906 | orchestrator | Tuesday 03 June 2025 15:31:16 +0000 (0:00:56.029) 0:01:35.243 ********** 2025-06-03 15:34:21.610915 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.610924 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.610934 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.610943 | orchestrator | 2025-06-03 15:34:21.610952 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-06-03 15:34:21.610961 | orchestrator | Tuesday 03 June 2025 15:31:16 +0000 (0:00:00.336) 0:01:35.579 ********** 2025-06-03 15:34:21.610970 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:21.610979 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:21.610988 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:21.610997 | orchestrator | 2025-06-03 15:34:21.611013 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-06-03 15:34:21.611033 | orchestrator | Tuesday 03 June 2025 15:31:17 +0000 (0:00:01.009) 0:01:36.589 ********** 2025-06-03 15:34:21.611042 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:21.611051 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:21.611061 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:21.611070 | orchestrator | 2025-06-03 15:34:21.611079 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-06-03 15:34:21.611089 | orchestrator | Tuesday 03 June 2025 15:31:18 +0000 (0:00:01.232) 0:01:37.822 ********** 2025-06-03 15:34:21.611098 
| orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:21.611107 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:21.611116 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:21.611125 | orchestrator | 2025-06-03 15:34:21.611135 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-06-03 15:34:21.611144 | orchestrator | Tuesday 03 June 2025 15:31:34 +0000 (0:00:15.949) 0:01:53.771 ********** 2025-06-03 15:34:21.611154 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:21.611163 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:21.611172 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:21.611182 | orchestrator | 2025-06-03 15:34:21.611191 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-06-03 15:34:21.611201 | orchestrator | Tuesday 03 June 2025 15:31:35 +0000 (0:00:00.788) 0:01:54.560 ********** 2025-06-03 15:34:21.611211 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:21.611220 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:21.611230 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:21.611239 | orchestrator | 2025-06-03 15:34:21.611249 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-06-03 15:34:21.611259 | orchestrator | Tuesday 03 June 2025 15:31:36 +0000 (0:00:00.630) 0:01:55.190 ********** 2025-06-03 15:34:21.611269 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:21.611278 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:21.611288 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:21.611298 | orchestrator | 2025-06-03 15:34:21.611319 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-06-03 15:34:21.611330 | orchestrator | Tuesday 03 June 2025 15:31:36 +0000 (0:00:00.664) 0:01:55.855 ********** 2025-06-03 15:34:21.611340 | orchestrator | ok: [testbed-node-0] 
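The four tasks around the node-token ("Wait for node-token", "Register node-token file access mode", "Change file access node-token", "Restore node-token file access") suggest a temporary permission widening: the join token on a k3s server is root-only, so the play records its mode, loosens it long enough to read it, stores the value, and puts the mode back. A sketch of that sequence, assuming the real file is `/var/lib/rancher/k3s/server/node-token` (a stand-in file and dummy token value are used here):

```shell
# Stand-in for /var/lib/rancher/k3s/server/node-token (dummy content).
token_file=./node-token
echo "K10dummy::server:secret" > "$token_file"
chmod 600 "$token_file"

orig_mode=$(stat -c '%a' "$token_file")   # register current access mode (600)
chmod g+rx,o+rx "$token_file"             # widen so a non-root read can succeed
token=$(cat "$token_file")                # read the join token
chmod "$orig_mode" "$token_file"          # restore the original access mode

echo "token length: ${#token}"
```

The stored token is what the agent nodes later pass as `K3S_TOKEN` when joining the cluster; restoring the mode afterwards keeps the secret from staying world-readable.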
2025-06-03 15:34:21.611350 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:21.611359 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:21.611368 | orchestrator | 2025-06-03 15:34:21.611378 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-06-03 15:34:21.611387 | orchestrator | Tuesday 03 June 2025 15:31:37 +0000 (0:00:00.920) 0:01:56.775 ********** 2025-06-03 15:34:21.611397 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:21.611406 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:21.611416 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:21.611425 | orchestrator | 2025-06-03 15:34:21.611434 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-06-03 15:34:21.611443 | orchestrator | Tuesday 03 June 2025 15:31:38 +0000 (0:00:00.355) 0:01:57.130 ********** 2025-06-03 15:34:21.611452 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:21.611461 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:21.611470 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:21.611479 | orchestrator | 2025-06-03 15:34:21.611488 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-06-03 15:34:21.611497 | orchestrator | Tuesday 03 June 2025 15:31:38 +0000 (0:00:00.749) 0:01:57.880 ********** 2025-06-03 15:34:21.611506 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:21.611515 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:21.611524 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:21.611534 | orchestrator | 2025-06-03 15:34:21.611544 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-06-03 15:34:21.611553 | orchestrator | Tuesday 03 June 2025 15:31:39 +0000 (0:00:00.833) 0:01:58.714 ********** 2025-06-03 15:34:21.611571 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:21.611581 | 
orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:21.611590 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:21.611600 | orchestrator | 2025-06-03 15:34:21.611609 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-06-03 15:34:21.611619 | orchestrator | Tuesday 03 June 2025 15:31:40 +0000 (0:00:01.258) 0:01:59.972 ********** 2025-06-03 15:34:21.611646 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:21.611672 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:21.611682 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:21.611692 | orchestrator | 2025-06-03 15:34:21.611701 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-06-03 15:34:21.611711 | orchestrator | Tuesday 03 June 2025 15:31:41 +0000 (0:00:00.926) 0:02:00.899 ********** 2025-06-03 15:34:21.611721 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.611731 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.611740 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.611750 | orchestrator | 2025-06-03 15:34:21.611760 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-06-03 15:34:21.611770 | orchestrator | Tuesday 03 June 2025 15:31:42 +0000 (0:00:00.275) 0:02:01.174 ********** 2025-06-03 15:34:21.611780 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.611789 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.611799 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.611808 | orchestrator | 2025-06-03 15:34:21.611817 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-06-03 15:34:21.611827 | orchestrator | Tuesday 03 June 2025 15:31:42 +0000 (0:00:00.318) 0:02:01.493 ********** 2025-06-03 15:34:21.611837 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:21.611847 | orchestrator | 
ok: [testbed-node-1] 2025-06-03 15:34:21.611856 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:21.611865 | orchestrator | 2025-06-03 15:34:21.611875 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-06-03 15:34:21.611884 | orchestrator | Tuesday 03 June 2025 15:31:43 +0000 (0:00:01.197) 0:02:02.691 ********** 2025-06-03 15:34:21.611894 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:21.611903 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:21.611912 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:21.611921 | orchestrator | 2025-06-03 15:34:21.611931 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-06-03 15:34:21.611941 | orchestrator | Tuesday 03 June 2025 15:31:44 +0000 (0:00:00.688) 0:02:03.380 ********** 2025-06-03 15:34:21.611951 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-03 15:34:21.611961 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-03 15:34:21.611970 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-03 15:34:21.611980 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-03 15:34:21.611989 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-03 15:34:21.611999 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-03 15:34:21.612009 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-03 15:34:21.612018 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-03 15:34:21.612027 | 
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-03 15:34:21.612036 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-06-03 15:34:21.612618 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-03 15:34:21.612702 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-03 15:34:21.612729 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-06-03 15:34:21.612740 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-03 15:34:21.612750 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-03 15:34:21.612760 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-03 15:34:21.612770 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-03 15:34:21.612780 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-03 15:34:21.612790 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-03 15:34:21.612799 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-03 15:34:21.612809 | orchestrator | 2025-06-03 15:34:21.612818 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-06-03 15:34:21.612827 | orchestrator | 2025-06-03 15:34:21.612836 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-06-03 15:34:21.612846 | orchestrator | Tuesday 03 June 2025 15:31:47 +0000 (0:00:02.995) 
0:02:06.375 ********** 2025-06-03 15:34:21.612856 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:34:21.612866 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:34:21.612875 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:34:21.612884 | orchestrator | 2025-06-03 15:34:21.612893 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-06-03 15:34:21.612903 | orchestrator | Tuesday 03 June 2025 15:31:47 +0000 (0:00:00.558) 0:02:06.934 ********** 2025-06-03 15:34:21.612911 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:34:21.612920 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:34:21.612929 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:34:21.612938 | orchestrator | 2025-06-03 15:34:21.612947 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-06-03 15:34:21.612956 | orchestrator | Tuesday 03 June 2025 15:31:48 +0000 (0:00:00.605) 0:02:07.539 ********** 2025-06-03 15:34:21.612965 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:34:21.612974 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:34:21.612983 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:34:21.612992 | orchestrator | 2025-06-03 15:34:21.613002 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-06-03 15:34:21.613010 | orchestrator | Tuesday 03 June 2025 15:31:48 +0000 (0:00:00.321) 0:02:07.861 ********** 2025-06-03 15:34:21.613020 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:34:21.613029 | orchestrator | 2025-06-03 15:34:21.613039 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-06-03 15:34:21.613047 | orchestrator | Tuesday 03 June 2025 15:31:49 +0000 (0:00:00.674) 0:02:08.536 ********** 2025-06-03 15:34:21.613055 | orchestrator | skipping: [testbed-node-3] 2025-06-03 
15:34:21.613064 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:34:21.613073 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:34:21.613082 | orchestrator | 2025-06-03 15:34:21.613091 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-06-03 15:34:21.613112 | orchestrator | Tuesday 03 June 2025 15:31:49 +0000 (0:00:00.318) 0:02:08.855 ********** 2025-06-03 15:34:21.613122 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:34:21.613132 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:34:21.613142 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:34:21.613153 | orchestrator | 2025-06-03 15:34:21.613162 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-06-03 15:34:21.613182 | orchestrator | Tuesday 03 June 2025 15:31:50 +0000 (0:00:00.340) 0:02:09.195 ********** 2025-06-03 15:34:21.613191 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:34:21.613200 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:34:21.613210 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:34:21.613219 | orchestrator | 2025-06-03 15:34:21.613229 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-06-03 15:34:21.613238 | orchestrator | Tuesday 03 June 2025 15:31:50 +0000 (0:00:00.340) 0:02:09.535 ********** 2025-06-03 15:34:21.613248 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:34:21.613257 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:34:21.613267 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:34:21.613276 | orchestrator | 2025-06-03 15:34:21.613286 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-06-03 15:34:21.613295 | orchestrator | Tuesday 03 June 2025 15:31:52 +0000 (0:00:01.725) 0:02:11.260 ********** 2025-06-03 15:34:21.613305 | orchestrator | changed: [testbed-node-3] 2025-06-03 
15:34:21.613315 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:34:21.613325 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:34:21.613334 | orchestrator | 2025-06-03 15:34:21.613344 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-03 15:34:21.613354 | orchestrator | 2025-06-03 15:34:21.613363 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-03 15:34:21.613374 | orchestrator | Tuesday 03 June 2025 15:32:00 +0000 (0:00:08.144) 0:02:19.405 ********** 2025-06-03 15:34:21.613383 | orchestrator | ok: [testbed-manager] 2025-06-03 15:34:21.613393 | orchestrator | 2025-06-03 15:34:21.613402 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-03 15:34:21.613412 | orchestrator | Tuesday 03 June 2025 15:32:01 +0000 (0:00:00.817) 0:02:20.222 ********** 2025-06-03 15:34:21.613423 | orchestrator | changed: [testbed-manager] 2025-06-03 15:34:21.613433 | orchestrator | 2025-06-03 15:34:21.613443 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-03 15:34:21.613452 | orchestrator | Tuesday 03 June 2025 15:32:01 +0000 (0:00:00.472) 0:02:20.695 ********** 2025-06-03 15:34:21.613462 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-03 15:34:21.613471 | orchestrator | 2025-06-03 15:34:21.613491 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-03 15:34:21.613501 | orchestrator | Tuesday 03 June 2025 15:32:02 +0000 (0:00:01.019) 0:02:21.714 ********** 2025-06-03 15:34:21.613510 | orchestrator | changed: [testbed-manager] 2025-06-03 15:34:21.613519 | orchestrator | 2025-06-03 15:34:21.613528 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-03 15:34:21.613538 | orchestrator | Tuesday 03 June 2025 15:32:03 +0000 
(0:00:00.902) 0:02:22.617 ********** 2025-06-03 15:34:21.613547 | orchestrator | changed: [testbed-manager] 2025-06-03 15:34:21.613556 | orchestrator | 2025-06-03 15:34:21.613565 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-03 15:34:21.613575 | orchestrator | Tuesday 03 June 2025 15:32:04 +0000 (0:00:00.590) 0:02:23.208 ********** 2025-06-03 15:34:21.613584 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-03 15:34:21.613594 | orchestrator | 2025-06-03 15:34:21.613603 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-03 15:34:21.613613 | orchestrator | Tuesday 03 June 2025 15:32:05 +0000 (0:00:01.650) 0:02:24.858 ********** 2025-06-03 15:34:21.613623 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-03 15:34:21.613633 | orchestrator | 2025-06-03 15:34:21.613643 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-03 15:34:21.613709 | orchestrator | Tuesday 03 June 2025 15:32:06 +0000 (0:00:00.901) 0:02:25.760 ********** 2025-06-03 15:34:21.613721 | orchestrator | changed: [testbed-manager] 2025-06-03 15:34:21.613732 | orchestrator | 2025-06-03 15:34:21.613742 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-03 15:34:21.613752 | orchestrator | Tuesday 03 June 2025 15:32:07 +0000 (0:00:00.560) 0:02:26.321 ********** 2025-06-03 15:34:21.613771 | orchestrator | changed: [testbed-manager] 2025-06-03 15:34:21.613780 | orchestrator | 2025-06-03 15:34:21.613790 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-06-03 15:34:21.613799 | orchestrator | 2025-06-03 15:34:21.613809 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-06-03 15:34:21.613818 | orchestrator | Tuesday 03 June 2025 15:32:07 +0000 (0:00:00.508) 
0:02:26.829 ********** 2025-06-03 15:34:21.613827 | orchestrator | ok: [testbed-manager] 2025-06-03 15:34:21.613837 | orchestrator | 2025-06-03 15:34:21.613847 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-06-03 15:34:21.613857 | orchestrator | Tuesday 03 June 2025 15:32:07 +0000 (0:00:00.144) 0:02:26.974 ********** 2025-06-03 15:34:21.613866 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-06-03 15:34:21.613876 | orchestrator | 2025-06-03 15:34:21.613886 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-06-03 15:34:21.613895 | orchestrator | Tuesday 03 June 2025 15:32:08 +0000 (0:00:00.478) 0:02:27.453 ********** 2025-06-03 15:34:21.613905 | orchestrator | ok: [testbed-manager] 2025-06-03 15:34:21.613915 | orchestrator | 2025-06-03 15:34:21.613924 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-06-03 15:34:21.613934 | orchestrator | Tuesday 03 June 2025 15:32:09 +0000 (0:00:00.881) 0:02:28.334 ********** 2025-06-03 15:34:21.613944 | orchestrator | ok: [testbed-manager] 2025-06-03 15:34:21.613954 | orchestrator | 2025-06-03 15:34:21.613963 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-06-03 15:34:21.613973 | orchestrator | Tuesday 03 June 2025 15:32:11 +0000 (0:00:01.744) 0:02:30.079 ********** 2025-06-03 15:34:21.613982 | orchestrator | changed: [testbed-manager] 2025-06-03 15:34:21.613992 | orchestrator | 2025-06-03 15:34:21.614008 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-06-03 15:34:21.614069 | orchestrator | Tuesday 03 June 2025 15:32:11 +0000 (0:00:00.869) 0:02:30.949 ********** 2025-06-03 15:34:21.614079 | orchestrator | ok: [testbed-manager] 2025-06-03 15:34:21.614089 | orchestrator | 2025-06-03 15:34:21.614099 | 
orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-06-03 15:34:21.614109 | orchestrator | Tuesday 03 June 2025 15:32:12 +0000 (0:00:00.507) 0:02:31.456 ********** 2025-06-03 15:34:21.614119 | orchestrator | changed: [testbed-manager] 2025-06-03 15:34:21.614129 | orchestrator | 2025-06-03 15:34:21.614139 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-06-03 15:34:21.614150 | orchestrator | Tuesday 03 June 2025 15:32:18 +0000 (0:00:06.585) 0:02:38.041 ********** 2025-06-03 15:34:21.614160 | orchestrator | changed: [testbed-manager] 2025-06-03 15:34:21.614171 | orchestrator | 2025-06-03 15:34:21.614182 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-06-03 15:34:21.614193 | orchestrator | Tuesday 03 June 2025 15:32:31 +0000 (0:00:12.504) 0:02:50.545 ********** 2025-06-03 15:34:21.614204 | orchestrator | ok: [testbed-manager] 2025-06-03 15:34:21.614214 | orchestrator | 2025-06-03 15:34:21.614224 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-06-03 15:34:21.614234 | orchestrator | 2025-06-03 15:34:21.614244 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-06-03 15:34:21.614255 | orchestrator | Tuesday 03 June 2025 15:32:32 +0000 (0:00:00.528) 0:02:51.074 ********** 2025-06-03 15:34:21.614265 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:21.614276 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:21.614286 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:21.614296 | orchestrator | 2025-06-03 15:34:21.614305 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-06-03 15:34:21.614314 | orchestrator | Tuesday 03 June 2025 15:32:32 +0000 (0:00:00.589) 0:02:51.663 ********** 2025-06-03 15:34:21.614323 | orchestrator | skipping: 
[testbed-node-0] 2025-06-03 15:34:21.614334 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.614350 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.614360 | orchestrator | 2025-06-03 15:34:21.614369 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-06-03 15:34:21.614378 | orchestrator | Tuesday 03 June 2025 15:32:32 +0000 (0:00:00.332) 0:02:51.996 ********** 2025-06-03 15:34:21.614389 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:34:21.614399 | orchestrator | 2025-06-03 15:34:21.614409 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-06-03 15:34:21.614430 | orchestrator | Tuesday 03 June 2025 15:32:33 +0000 (0:00:00.538) 0:02:52.534 ********** 2025-06-03 15:34:21.614441 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-03 15:34:21.614451 | orchestrator | 2025-06-03 15:34:21.614461 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-06-03 15:34:21.614471 | orchestrator | Tuesday 03 June 2025 15:32:34 +0000 (0:00:01.284) 0:02:53.819 ********** 2025-06-03 15:34:21.614480 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:34:21.614489 | orchestrator | 2025-06-03 15:34:21.614498 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-06-03 15:34:21.614508 | orchestrator | Tuesday 03 June 2025 15:32:35 +0000 (0:00:00.901) 0:02:54.720 ********** 2025-06-03 15:34:21.614516 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.614526 | orchestrator | 2025-06-03 15:34:21.614535 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-06-03 15:34:21.614545 | orchestrator | Tuesday 03 June 2025 15:32:35 +0000 (0:00:00.212) 0:02:54.932 ********** 2025-06-03 
15:34:21.614554 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:34:21.614563 | orchestrator | 2025-06-03 15:34:21.614572 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-06-03 15:34:21.614582 | orchestrator | Tuesday 03 June 2025 15:32:36 +0000 (0:00:01.107) 0:02:56.040 ********** 2025-06-03 15:34:21.614591 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.614601 | orchestrator | 2025-06-03 15:34:21.614610 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-06-03 15:34:21.614620 | orchestrator | Tuesday 03 June 2025 15:32:37 +0000 (0:00:00.186) 0:02:56.227 ********** 2025-06-03 15:34:21.614629 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.614639 | orchestrator | 2025-06-03 15:34:21.614648 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-06-03 15:34:21.614717 | orchestrator | Tuesday 03 June 2025 15:32:37 +0000 (0:00:00.219) 0:02:56.447 ********** 2025-06-03 15:34:21.614727 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.614736 | orchestrator | 2025-06-03 15:34:21.614744 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-06-03 15:34:21.614753 | orchestrator | Tuesday 03 June 2025 15:32:37 +0000 (0:00:00.214) 0:02:56.662 ********** 2025-06-03 15:34:21.614762 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.614771 | orchestrator | 2025-06-03 15:34:21.614780 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-06-03 15:34:21.614790 | orchestrator | Tuesday 03 June 2025 15:32:37 +0000 (0:00:00.220) 0:02:56.882 ********** 2025-06-03 15:34:21.614799 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-03 15:34:21.614809 | orchestrator | 2025-06-03 15:34:21.614819 | orchestrator | TASK [k3s_server_post : Wait for Cilium 
resources] ***************************** 2025-06-03 15:34:21.614828 | orchestrator | Tuesday 03 June 2025 15:32:42 +0000 (0:00:04.527) 0:03:01.409 ********** 2025-06-03 15:34:21.614838 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-06-03 15:34:21.614848 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2025-06-03 15:34:21.614858 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-06-03 15:34:21.614867 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-06-03 15:34:21.614886 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-06-03 15:34:21.614895 | orchestrator | 2025-06-03 15:34:21.614910 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-06-03 15:34:21.614919 | orchestrator | Tuesday 03 June 2025 15:33:52 +0000 (0:01:10.502) 0:04:11.912 ********** 2025-06-03 15:34:21.614928 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:34:21.614937 | orchestrator | 2025-06-03 15:34:21.614946 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-06-03 15:34:21.614955 | orchestrator | Tuesday 03 June 2025 15:33:54 +0000 (0:00:01.180) 0:04:13.092 ********** 2025-06-03 15:34:21.614963 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-03 15:34:21.614972 | orchestrator | 2025-06-03 15:34:21.614980 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-06-03 15:34:21.614988 | orchestrator | Tuesday 03 June 2025 15:33:55 +0000 (0:00:01.530) 0:04:14.623 ********** 2025-06-03 15:34:21.614997 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-03 15:34:21.615006 | orchestrator | 2025-06-03 15:34:21.615014 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests 
application fails] *** 2025-06-03 15:34:21.615023 | orchestrator | Tuesday 03 June 2025 15:33:56 +0000 (0:00:01.431) 0:04:16.054 ********** 2025-06-03 15:34:21.615053 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.615061 | orchestrator | 2025-06-03 15:34:21.615069 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-06-03 15:34:21.615078 | orchestrator | Tuesday 03 June 2025 15:33:57 +0000 (0:00:00.172) 0:04:16.227 ********** 2025-06-03 15:34:21.615086 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-06-03 15:34:21.615094 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-06-03 15:34:21.615102 | orchestrator | 2025-06-03 15:34:21.615110 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-06-03 15:34:21.615117 | orchestrator | Tuesday 03 June 2025 15:33:59 +0000 (0:00:02.163) 0:04:18.391 ********** 2025-06-03 15:34:21.615124 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.615132 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.615140 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.615148 | orchestrator | 2025-06-03 15:34:21.615157 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-06-03 15:34:21.615166 | orchestrator | Tuesday 03 June 2025 15:33:59 +0000 (0:00:00.342) 0:04:18.733 ********** 2025-06-03 15:34:21.615174 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:21.615182 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:21.615191 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:21.615199 | orchestrator | 2025-06-03 15:34:21.615217 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-06-03 15:34:21.615226 | orchestrator | 2025-06-03 15:34:21.615234 | orchestrator 
| TASK [k9s : Gather variables for each operating system] ************************ 2025-06-03 15:34:21.615242 | orchestrator | Tuesday 03 June 2025 15:34:00 +0000 (0:00:00.906) 0:04:19.639 ********** 2025-06-03 15:34:21.615250 | orchestrator | ok: [testbed-manager] 2025-06-03 15:34:21.615258 | orchestrator | 2025-06-03 15:34:21.615265 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-06-03 15:34:21.615273 | orchestrator | Tuesday 03 June 2025 15:34:00 +0000 (0:00:00.357) 0:04:19.997 ********** 2025-06-03 15:34:21.615281 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-06-03 15:34:21.615289 | orchestrator | 2025-06-03 15:34:21.615297 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-06-03 15:34:21.615305 | orchestrator | Tuesday 03 June 2025 15:34:01 +0000 (0:00:00.227) 0:04:20.224 ********** 2025-06-03 15:34:21.615314 | orchestrator | changed: [testbed-manager] 2025-06-03 15:34:21.615322 | orchestrator | 2025-06-03 15:34:21.615330 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-06-03 15:34:21.615347 | orchestrator | 2025-06-03 15:34:21.615355 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-06-03 15:34:21.615363 | orchestrator | Tuesday 03 June 2025 15:34:07 +0000 (0:00:06.515) 0:04:26.740 ********** 2025-06-03 15:34:21.615371 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:34:21.615379 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:34:21.615387 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:34:21.615396 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:21.615404 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:21.615412 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:21.615420 | orchestrator | 2025-06-03 15:34:21.615429 | orchestrator | TASK [Manage labels] 
*********************************************************** 2025-06-03 15:34:21.615437 | orchestrator | Tuesday 03 June 2025 15:34:08 +0000 (0:00:00.744) 0:04:27.484 ********** 2025-06-03 15:34:21.615445 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-03 15:34:21.615454 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-03 15:34:21.615463 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-03 15:34:21.615472 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-03 15:34:21.615481 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-03 15:34:21.615490 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-03 15:34:21.615499 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-03 15:34:21.615508 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-03 15:34:21.615518 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-03 15:34:21.615527 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-03 15:34:21.615536 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-03 15:34:21.615551 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-03 15:34:21.615559 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-03 15:34:21.615567 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-03 15:34:21.615574 | orchestrator | ok: [testbed-node-4 -> 
localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-03 15:34:21.615582 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-03 15:34:21.615590 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-03 15:34:21.615598 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-03 15:34:21.615606 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-03 15:34:21.615614 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-03 15:34:21.615623 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-03 15:34:21.615631 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-03 15:34:21.615639 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-03 15:34:21.615647 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-03 15:34:21.615680 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-03 15:34:21.615689 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-03 15:34:21.615698 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-03 15:34:21.615714 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-03 15:34:21.615722 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-03 15:34:21.615730 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-03 15:34:21.615739 | orchestrator | 2025-06-03 15:34:21.615756 | 
orchestrator | TASK [Manage annotations] ****************************************************** 2025-06-03 15:34:21.615766 | orchestrator | Tuesday 03 June 2025 15:34:17 +0000 (0:00:09.561) 0:04:37.046 ********** 2025-06-03 15:34:21.615774 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:34:21.615783 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:34:21.615791 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:34:21.615800 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.615808 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.615815 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.615824 | orchestrator | 2025-06-03 15:34:21.615832 | orchestrator | TASK [Manage taints] *********************************************************** 2025-06-03 15:34:21.615841 | orchestrator | Tuesday 03 June 2025 15:34:18 +0000 (0:00:00.446) 0:04:37.492 ********** 2025-06-03 15:34:21.615849 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:34:21.615858 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:34:21.615866 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:34:21.615875 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:21.615884 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:21.615893 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:21.615901 | orchestrator | 2025-06-03 15:34:21.615909 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:34:21.615917 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:34:21.615928 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-06-03 15:34:21.615937 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-03 15:34:21.615945 | orchestrator | testbed-node-2 : ok=34  
changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-03 15:34:21.615953 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-03 15:34:21.615962 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-03 15:34:21.615970 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-03 15:34:21.615977 | orchestrator | 2025-06-03 15:34:21.615986 | orchestrator | 2025-06-03 15:34:21.615994 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:34:21.616002 | orchestrator | Tuesday 03 June 2025 15:34:19 +0000 (0:00:00.640) 0:04:38.133 ********** 2025-06-03 15:34:21.616011 | orchestrator | =============================================================================== 2025-06-03 15:34:21.616019 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 70.50s 2025-06-03 15:34:21.616034 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 56.03s 2025-06-03 15:34:21.616042 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 15.95s 2025-06-03 15:34:21.616050 | orchestrator | kubectl : Install required packages ------------------------------------ 12.50s 2025-06-03 15:34:21.616058 | orchestrator | Manage labels ----------------------------------------------------------- 9.56s 2025-06-03 15:34:21.616073 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.14s 2025-06-03 15:34:21.616082 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.58s 2025-06-03 15:34:21.616089 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.52s 2025-06-03 15:34:21.616097 | orchestrator | k3s_download : 
Download k3s binary x64 ---------------------------------- 6.46s 2025-06-03 15:34:21.616104 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.53s 2025-06-03 15:34:21.616111 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.48s 2025-06-03 15:34:21.616119 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.00s 2025-06-03 15:34:21.616127 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.16s 2025-06-03 15:34:21.616134 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.04s 2025-06-03 15:34:21.616141 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.02s 2025-06-03 15:34:21.616149 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.00s 2025-06-03 15:34:21.616157 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.93s 2025-06-03 15:34:21.616165 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.74s 2025-06-03 15:34:21.616173 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.74s 2025-06-03 15:34:21.616181 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.73s 2025-06-03 15:34:21.616189 | orchestrator | 2025-06-03 15:34:21 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:21.616224 | orchestrator | 2025-06-03 15:34:21 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:21.616234 | orchestrator | 2025-06-03 15:34:21 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:24.648872 | orchestrator | 2025-06-03 15:34:24 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in 
state STARTED 2025-06-03 15:34:24.648951 | orchestrator | 2025-06-03 15:34:24 | INFO  | Task ce5c7d0c-d3cf-4e08-9aa6-97f4ebbb6135 is in state STARTED 2025-06-03 15:34:24.649722 | orchestrator | 2025-06-03 15:34:24 | INFO  | Task bccc5784-62e4-40e0-b3a7-9b9dc1645d20 is in state STARTED 2025-06-03 15:34:24.652086 | orchestrator | 2025-06-03 15:34:24 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:34:24.654625 | orchestrator | 2025-06-03 15:34:24 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:24.654698 | orchestrator | 2025-06-03 15:34:24 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:24.654706 | orchestrator | 2025-06-03 15:34:24 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:27.700907 | orchestrator | 2025-06-03 15:34:27 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:27.701002 | orchestrator | 2025-06-03 15:34:27 | INFO  | Task ce5c7d0c-d3cf-4e08-9aa6-97f4ebbb6135 is in state STARTED 2025-06-03 15:34:27.705329 | orchestrator | 2025-06-03 15:34:27 | INFO  | Task bccc5784-62e4-40e0-b3a7-9b9dc1645d20 is in state SUCCESS 2025-06-03 15:34:27.705885 | orchestrator | 2025-06-03 15:34:27 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:34:27.706781 | orchestrator | 2025-06-03 15:34:27 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:27.708474 | orchestrator | 2025-06-03 15:34:27 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:27.709382 | orchestrator | 2025-06-03 15:34:27 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:30.748232 | orchestrator | 2025-06-03 15:34:30 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:30.749444 | orchestrator | 2025-06-03 15:34:30 | INFO  | Task ce5c7d0c-d3cf-4e08-9aa6-97f4ebbb6135 is in state 
STARTED 2025-06-03 15:34:30.751699 | orchestrator | 2025-06-03 15:34:30 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:34:30.752968 | orchestrator | 2025-06-03 15:34:30 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:30.754397 | orchestrator | 2025-06-03 15:34:30 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:30.754721 | orchestrator | 2025-06-03 15:34:30 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:33.816745 | orchestrator | 2025-06-03 15:34:33 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:33.816875 | orchestrator | 2025-06-03 15:34:33 | INFO  | Task ce5c7d0c-d3cf-4e08-9aa6-97f4ebbb6135 is in state SUCCESS 2025-06-03 15:34:33.817643 | orchestrator | 2025-06-03 15:34:33 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:34:33.820537 | orchestrator | 2025-06-03 15:34:33 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:33.821953 | orchestrator | 2025-06-03 15:34:33 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:33.821997 | orchestrator | 2025-06-03 15:34:33 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:36.871842 | orchestrator | 2025-06-03 15:34:36 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:36.872781 | orchestrator | 2025-06-03 15:34:36 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:34:36.874954 | orchestrator | 2025-06-03 15:34:36 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:36.877329 | orchestrator | 2025-06-03 15:34:36 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:36.877368 | orchestrator | 2025-06-03 15:34:36 | INFO  | Wait 1 second(s) until the next check 2025-06-03 
15:34:39.909326 | orchestrator | 2025-06-03 15:34:39 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:39.910795 | orchestrator | 2025-06-03 15:34:39 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:34:39.911838 | orchestrator | 2025-06-03 15:34:39 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:39.912943 | orchestrator | 2025-06-03 15:34:39 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:39.913090 | orchestrator | 2025-06-03 15:34:39 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:42.953140 | orchestrator | 2025-06-03 15:34:42 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:42.954012 | orchestrator | 2025-06-03 15:34:42 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:34:42.955721 | orchestrator | 2025-06-03 15:34:42 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:42.958133 | orchestrator | 2025-06-03 15:34:42 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:42.958166 | orchestrator | 2025-06-03 15:34:42 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:45.996092 | orchestrator | 2025-06-03 15:34:45 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:45.997339 | orchestrator | 2025-06-03 15:34:45 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:34:45.998232 | orchestrator | 2025-06-03 15:34:45 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:46.000872 | orchestrator | 2025-06-03 15:34:45 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:46.001343 | orchestrator | 2025-06-03 15:34:45 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:49.043321 | orchestrator 
| 2025-06-03 15:34:49 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:49.043407 | orchestrator | 2025-06-03 15:34:49 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:34:49.048542 | orchestrator | 2025-06-03 15:34:49 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:49.049616 | orchestrator | 2025-06-03 15:34:49 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:49.049684 | orchestrator | 2025-06-03 15:34:49 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:52.092522 | orchestrator | 2025-06-03 15:34:52 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:52.093597 | orchestrator | 2025-06-03 15:34:52 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state STARTED 2025-06-03 15:34:52.094900 | orchestrator | 2025-06-03 15:34:52 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:52.096188 | orchestrator | 2025-06-03 15:34:52 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:52.096749 | orchestrator | 2025-06-03 15:34:52 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:55.126816 | orchestrator | 2025-06-03 15:34:55 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:55.131952 | orchestrator | 2025-06-03 15:34:55.132020 | orchestrator | 2025-06-03 15:34:55.132026 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-06-03 15:34:55.132032 | orchestrator | 2025-06-03 15:34:55.132036 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-03 15:34:55.132041 | orchestrator | Tuesday 03 June 2025 15:34:23 +0000 (0:00:00.166) 0:00:00.166 ********** 2025-06-03 15:34:55.132046 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] 2025-06-03 15:34:55.132051 | orchestrator | 2025-06-03 15:34:55.132055 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-03 15:34:55.132059 | orchestrator | Tuesday 03 June 2025 15:34:24 +0000 (0:00:00.782) 0:00:00.948 ********** 2025-06-03 15:34:55.132075 | orchestrator | changed: [testbed-manager] 2025-06-03 15:34:55.132080 | orchestrator | 2025-06-03 15:34:55.132084 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-06-03 15:34:55.132088 | orchestrator | Tuesday 03 June 2025 15:34:25 +0000 (0:00:01.221) 0:00:02.170 ********** 2025-06-03 15:34:55.132092 | orchestrator | changed: [testbed-manager] 2025-06-03 15:34:55.132096 | orchestrator | 2025-06-03 15:34:55.132100 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:34:55.132104 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:34:55.132112 | orchestrator | 2025-06-03 15:34:55.132118 | orchestrator | 2025-06-03 15:34:55.132124 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:34:55.132131 | orchestrator | Tuesday 03 June 2025 15:34:25 +0000 (0:00:00.427) 0:00:02.598 ********** 2025-06-03 15:34:55.132139 | orchestrator | =============================================================================== 2025-06-03 15:34:55.132147 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.22s 2025-06-03 15:34:55.132176 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.78s 2025-06-03 15:34:55.132182 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.43s 2025-06-03 15:34:55.132187 | orchestrator | 2025-06-03 15:34:55.132192 | orchestrator | 2025-06-03 15:34:55.132198 | 
orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-03 15:34:55.132204 | orchestrator | 2025-06-03 15:34:55.132210 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-03 15:34:55.132215 | orchestrator | Tuesday 03 June 2025 15:34:24 +0000 (0:00:00.179) 0:00:00.179 ********** 2025-06-03 15:34:55.132220 | orchestrator | ok: [testbed-manager] 2025-06-03 15:34:55.132226 | orchestrator | 2025-06-03 15:34:55.132231 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-03 15:34:55.132237 | orchestrator | Tuesday 03 June 2025 15:34:24 +0000 (0:00:00.637) 0:00:00.816 ********** 2025-06-03 15:34:55.132244 | orchestrator | ok: [testbed-manager] 2025-06-03 15:34:55.132250 | orchestrator | 2025-06-03 15:34:55.132256 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-03 15:34:55.132262 | orchestrator | Tuesday 03 June 2025 15:34:25 +0000 (0:00:00.608) 0:00:01.425 ********** 2025-06-03 15:34:55.132268 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-03 15:34:55.132274 | orchestrator | 2025-06-03 15:34:55.132280 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-03 15:34:55.132284 | orchestrator | Tuesday 03 June 2025 15:34:26 +0000 (0:00:00.700) 0:00:02.125 ********** 2025-06-03 15:34:55.132288 | orchestrator | changed: [testbed-manager] 2025-06-03 15:34:55.132292 | orchestrator | 2025-06-03 15:34:55.132296 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-03 15:34:55.132300 | orchestrator | Tuesday 03 June 2025 15:34:27 +0000 (0:00:00.856) 0:00:02.981 ********** 2025-06-03 15:34:55.132303 | orchestrator | changed: [testbed-manager] 2025-06-03 15:34:55.132307 | orchestrator | 2025-06-03 15:34:55.132311 | orchestrator | TASK [Make 
kubeconfig available for use inside the manager service] ************ 2025-06-03 15:34:55.132315 | orchestrator | Tuesday 03 June 2025 15:34:27 +0000 (0:00:00.837) 0:00:03.819 ********** 2025-06-03 15:34:55.132319 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-03 15:34:55.132322 | orchestrator | 2025-06-03 15:34:55.132326 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-03 15:34:55.132330 | orchestrator | Tuesday 03 June 2025 15:34:29 +0000 (0:00:01.347) 0:00:05.166 ********** 2025-06-03 15:34:55.132334 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-03 15:34:55.132338 | orchestrator | 2025-06-03 15:34:55.132342 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-03 15:34:55.132345 | orchestrator | Tuesday 03 June 2025 15:34:30 +0000 (0:00:00.763) 0:00:05.929 ********** 2025-06-03 15:34:55.132349 | orchestrator | ok: [testbed-manager] 2025-06-03 15:34:55.132353 | orchestrator | 2025-06-03 15:34:55.132357 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-03 15:34:55.132361 | orchestrator | Tuesday 03 June 2025 15:34:30 +0000 (0:00:00.348) 0:00:06.278 ********** 2025-06-03 15:34:55.132364 | orchestrator | ok: [testbed-manager] 2025-06-03 15:34:55.132368 | orchestrator | 2025-06-03 15:34:55.132372 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:34:55.132376 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:34:55.132379 | orchestrator | 2025-06-03 15:34:55.132383 | orchestrator | 2025-06-03 15:34:55.132388 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:34:55.132394 | orchestrator | Tuesday 03 June 2025 15:34:30 +0000 (0:00:00.270) 0:00:06.550 ********** 2025-06-03 
15:34:55.132400 | orchestrator | =============================================================================== 2025-06-03 15:34:55.132413 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.35s 2025-06-03 15:34:55.132426 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.86s 2025-06-03 15:34:55.132430 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.84s 2025-06-03 15:34:55.132445 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.76s 2025-06-03 15:34:55.132460 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.70s 2025-06-03 15:34:55.132464 | orchestrator | Get home directory of operator user ------------------------------------- 0.64s 2025-06-03 15:34:55.132468 | orchestrator | Create .kube directory -------------------------------------------------- 0.61s 2025-06-03 15:34:55.132472 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.35s 2025-06-03 15:34:55.132475 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.27s 2025-06-03 15:34:55.132479 | orchestrator | 2025-06-03 15:34:55.132483 | orchestrator | 2025-06-03 15:34:55.132487 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-06-03 15:34:55.132490 | orchestrator | 2025-06-03 15:34:55.132494 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-03 15:34:55.132498 | orchestrator | Tuesday 03 June 2025 15:32:41 +0000 (0:00:00.154) 0:00:00.154 ********** 2025-06-03 15:34:55.132502 | orchestrator | ok: [localhost] => { 2025-06-03 15:34:55.132506 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2025-06-03 15:34:55.132511 | orchestrator | } 2025-06-03 15:34:55.132515 | orchestrator | 2025-06-03 15:34:55.132519 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-06-03 15:34:55.132522 | orchestrator | Tuesday 03 June 2025 15:32:41 +0000 (0:00:00.039) 0:00:00.194 ********** 2025-06-03 15:34:55.132528 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-06-03 15:34:55.132534 | orchestrator | ...ignoring 2025-06-03 15:34:55.132538 | orchestrator | 2025-06-03 15:34:55.132541 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-06-03 15:34:55.132545 | orchestrator | Tuesday 03 June 2025 15:32:44 +0000 (0:00:03.824) 0:00:04.018 ********** 2025-06-03 15:34:55.132549 | orchestrator | skipping: [localhost] 2025-06-03 15:34:55.132553 | orchestrator | 2025-06-03 15:34:55.132556 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-06-03 15:34:55.132560 | orchestrator | Tuesday 03 June 2025 15:32:44 +0000 (0:00:00.049) 0:00:04.067 ********** 2025-06-03 15:34:55.132564 | orchestrator | ok: [localhost] 2025-06-03 15:34:55.132568 | orchestrator | 2025-06-03 15:34:55.132572 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:34:55.132575 | orchestrator | 2025-06-03 15:34:55.132579 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:34:55.132583 | orchestrator | Tuesday 03 June 2025 15:32:45 +0000 (0:00:00.142) 0:00:04.209 ********** 2025-06-03 15:34:55.132587 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:55.132590 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:55.132594 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:55.132598 | orchestrator | 2025-06-03 
15:34:55.132602 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:34:55.132605 | orchestrator | Tuesday 03 June 2025 15:32:45 +0000 (0:00:00.358) 0:00:04.568 ********** 2025-06-03 15:34:55.132609 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-06-03 15:34:55.132614 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-06-03 15:34:55.132617 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-06-03 15:34:55.132621 | orchestrator | 2025-06-03 15:34:55.132625 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-06-03 15:34:55.132629 | orchestrator | 2025-06-03 15:34:55.132632 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-03 15:34:55.132640 | orchestrator | Tuesday 03 June 2025 15:32:46 +0000 (0:00:00.542) 0:00:05.111 ********** 2025-06-03 15:34:55.132644 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:34:55.132648 | orchestrator | 2025-06-03 15:34:55.132652 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-03 15:34:55.132678 | orchestrator | Tuesday 03 June 2025 15:32:46 +0000 (0:00:00.537) 0:00:05.648 ********** 2025-06-03 15:34:55.132682 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:55.132686 | orchestrator | 2025-06-03 15:34:55.132689 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-06-03 15:34:55.132693 | orchestrator | Tuesday 03 June 2025 15:32:47 +0000 (0:00:01.025) 0:00:06.673 ********** 2025-06-03 15:34:55.132697 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:55.132701 | orchestrator | 2025-06-03 15:34:55.132704 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2025-06-03 15:34:55.132708 | orchestrator | Tuesday 03 June 2025 15:32:47 +0000 (0:00:00.355) 0:00:07.029 ********** 2025-06-03 15:34:55.132712 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:55.132716 | orchestrator | 2025-06-03 15:34:55.132719 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-06-03 15:34:55.132723 | orchestrator | Tuesday 03 June 2025 15:32:48 +0000 (0:00:00.370) 0:00:07.399 ********** 2025-06-03 15:34:55.132727 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:55.132731 | orchestrator | 2025-06-03 15:34:55.132734 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-06-03 15:34:55.132738 | orchestrator | Tuesday 03 June 2025 15:32:48 +0000 (0:00:00.365) 0:00:07.764 ********** 2025-06-03 15:34:55.132742 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:55.132746 | orchestrator | 2025-06-03 15:34:55.132749 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-03 15:34:55.132753 | orchestrator | Tuesday 03 June 2025 15:32:49 +0000 (0:00:00.481) 0:00:08.246 ********** 2025-06-03 15:34:55.132761 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:34:55.132765 | orchestrator | 2025-06-03 15:34:55.132769 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-03 15:34:55.132776 | orchestrator | Tuesday 03 June 2025 15:32:50 +0000 (0:00:00.992) 0:00:09.239 ********** 2025-06-03 15:34:55.132779 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:55.132783 | orchestrator | 2025-06-03 15:34:55.132787 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-06-03 15:34:55.132791 | orchestrator | Tuesday 03 June 2025 15:32:50 +0000 (0:00:00.798) 0:00:10.038 ********** 2025-06-03 
15:34:55.132795 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:55.132798 | orchestrator | 2025-06-03 15:34:55.132802 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-06-03 15:34:55.132806 | orchestrator | Tuesday 03 June 2025 15:32:51 +0000 (0:00:00.437) 0:00:10.475 ********** 2025-06-03 15:34:55.132810 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:55.132813 | orchestrator | 2025-06-03 15:34:55.132817 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-06-03 15:34:55.132821 | orchestrator | Tuesday 03 June 2025 15:32:51 +0000 (0:00:00.434) 0:00:10.910 ********** 2025-06-03 15:34:55.132828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:34:55.132839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:34:55.132844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 
15:34:55.132848 | orchestrator | 2025-06-03 15:34:55.132855 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-06-03 15:34:55.132859 | orchestrator | Tuesday 03 June 2025 15:32:53 +0000 (0:00:01.284) 0:00:12.194 ********** 2025-06-03 15:34:55.132866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:34:55.132871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:34:55.132879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:34:55.132883 | orchestrator | 2025-06-03 15:34:55.132887 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-06-03 15:34:55.132890 | orchestrator | Tuesday 03 June 2025 15:32:55 +0000 (0:00:02.448) 0:00:14.643 ********** 2025-06-03 15:34:55.132894 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-03 15:34:55.132899 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-03 15:34:55.132902 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-03 15:34:55.132916 | orchestrator | 2025-06-03 15:34:55.132920 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-06-03 15:34:55.132924 | orchestrator | Tuesday 03 June 2025 15:32:57 +0000 (0:00:01.992) 0:00:16.635 ********** 2025-06-03 15:34:55.132928 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-03 15:34:55.132935 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-03 15:34:55.132939 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-03 15:34:55.132942 | orchestrator | 2025-06-03 15:34:55.132946 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-06-03 15:34:55.132953 | orchestrator | Tuesday 03 June 2025 15:32:59 +0000 (0:00:01.714) 0:00:18.350 ********** 2025-06-03 15:34:55.132956 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-03 15:34:55.132960 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-03 15:34:55.132964 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-03 15:34:55.132968 | orchestrator | 2025-06-03 15:34:55.132972 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-06-03 15:34:55.132976 | orchestrator | Tuesday 03 June 2025 15:33:00 +0000 (0:00:01.419) 0:00:19.770 ********** 2025-06-03 15:34:55.132982 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-03 15:34:55.132986 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-03 15:34:55.132990 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-03 15:34:55.132994 | orchestrator | 2025-06-03 15:34:55.132998 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-06-03 15:34:55.133002 | orchestrator | Tuesday 03 June 2025 15:33:02 +0000 (0:00:01.615) 0:00:21.386 ********** 2025-06-03 15:34:55.133005 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-03 15:34:55.133009 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-03 15:34:55.133013 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-03 15:34:55.133017 | orchestrator | 2025-06-03 15:34:55.133021 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-06-03 15:34:55.133024 | orchestrator | Tuesday 03 June 2025 15:33:04 +0000 (0:00:01.908) 0:00:23.294 ********** 2025-06-03 15:34:55.133028 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-03 15:34:55.133032 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-03 15:34:55.133036 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-03 15:34:55.133040 | orchestrator | 2025-06-03 15:34:55.133043 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-03 15:34:55.133047 | orchestrator | Tuesday 03 June 2025 15:33:05 +0000 (0:00:01.685) 0:00:24.979 ********** 2025-06-03 
15:34:55.133051 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:55.133055 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:55.133058 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:55.133062 | orchestrator | 2025-06-03 15:34:55.133066 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-06-03 15:34:55.133070 | orchestrator | Tuesday 03 June 2025 15:33:06 +0000 (0:00:00.364) 0:00:25.344 ********** 2025-06-03 15:34:55.133074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:34:55.133084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:34:55.133099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:34:55.133104 | orchestrator | 2025-06-03 15:34:55.133108 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-06-03 15:34:55.133111 | orchestrator | Tuesday 03 June 2025 
15:33:07 +0000 (0:00:01.316) 0:00:26.661 ********** 2025-06-03 15:34:55.133115 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:55.133119 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:55.133123 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:55.133127 | orchestrator | 2025-06-03 15:34:55.133130 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-06-03 15:34:55.133134 | orchestrator | Tuesday 03 June 2025 15:33:08 +0000 (0:00:00.865) 0:00:27.526 ********** 2025-06-03 15:34:55.133138 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:55.133142 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:55.133145 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:55.133149 | orchestrator | 2025-06-03 15:34:55.133153 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-06-03 15:34:55.133157 | orchestrator | Tuesday 03 June 2025 15:33:15 +0000 (0:00:06.652) 0:00:34.178 ********** 2025-06-03 15:34:55.133160 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:55.133164 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:55.133175 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:55.133179 | orchestrator | 2025-06-03 15:34:55.133183 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-03 15:34:55.133187 | orchestrator | 2025-06-03 15:34:55.133191 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-03 15:34:55.133194 | orchestrator | Tuesday 03 June 2025 15:33:15 +0000 (0:00:00.455) 0:00:34.634 ********** 2025-06-03 15:34:55.133198 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:55.133202 | orchestrator | 2025-06-03 15:34:55.133206 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-03 15:34:55.133209 | orchestrator | Tuesday 03 
June 2025 15:33:16 +0000 (0:00:00.600) 0:00:35.234 ********** 2025-06-03 15:34:55.133213 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:34:55.133217 | orchestrator | 2025-06-03 15:34:55.133221 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-03 15:34:55.133224 | orchestrator | Tuesday 03 June 2025 15:33:16 +0000 (0:00:00.294) 0:00:35.529 ********** 2025-06-03 15:34:55.133234 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:55.133238 | orchestrator | 2025-06-03 15:34:55.133242 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-03 15:34:55.133250 | orchestrator | Tuesday 03 June 2025 15:33:19 +0000 (0:00:02.996) 0:00:38.525 ********** 2025-06-03 15:34:55.133253 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:34:55.133257 | orchestrator | 2025-06-03 15:34:55.133269 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-03 15:34:55.133273 | orchestrator | 2025-06-03 15:34:55.133277 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-03 15:34:55.133287 | orchestrator | Tuesday 03 June 2025 15:34:13 +0000 (0:00:54.533) 0:01:33.059 ********** 2025-06-03 15:34:55.133291 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:55.133295 | orchestrator | 2025-06-03 15:34:55.133299 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-03 15:34:55.133303 | orchestrator | Tuesday 03 June 2025 15:34:14 +0000 (0:00:00.664) 0:01:33.723 ********** 2025-06-03 15:34:55.133306 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:34:55.133310 | orchestrator | 2025-06-03 15:34:55.133314 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-03 15:34:55.133318 | orchestrator | Tuesday 03 June 2025 15:34:14 +0000 (0:00:00.340) 0:01:34.063 
********** 2025-06-03 15:34:55.133324 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:55.133331 | orchestrator | 2025-06-03 15:34:55.133337 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-03 15:34:55.133343 | orchestrator | Tuesday 03 June 2025 15:34:16 +0000 (0:00:01.824) 0:01:35.887 ********** 2025-06-03 15:34:55.133348 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:34:55.133354 | orchestrator | 2025-06-03 15:34:55.133361 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-03 15:34:55.133367 | orchestrator | 2025-06-03 15:34:55.133374 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-03 15:34:55.133380 | orchestrator | Tuesday 03 June 2025 15:34:31 +0000 (0:00:14.588) 0:01:50.476 ********** 2025-06-03 15:34:55.133387 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:55.133391 | orchestrator | 2025-06-03 15:34:55.133398 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-03 15:34:55.133402 | orchestrator | Tuesday 03 June 2025 15:34:32 +0000 (0:00:00.662) 0:01:51.139 ********** 2025-06-03 15:34:55.133406 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:34:55.133409 | orchestrator | 2025-06-03 15:34:55.133413 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-03 15:34:55.133417 | orchestrator | Tuesday 03 June 2025 15:34:32 +0000 (0:00:00.213) 0:01:51.352 ********** 2025-06-03 15:34:55.133421 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:34:55.133425 | orchestrator | 2025-06-03 15:34:55.133429 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-03 15:34:55.133433 | orchestrator | Tuesday 03 June 2025 15:34:38 +0000 (0:00:06.624) 0:01:57.977 ********** 2025-06-03 15:34:55.133437 | orchestrator | 
changed: [testbed-node-2] 2025-06-03 15:34:55.133440 | orchestrator | 2025-06-03 15:34:55.133444 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-06-03 15:34:55.133448 | orchestrator | 2025-06-03 15:34:55.133451 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-06-03 15:34:55.133455 | orchestrator | Tuesday 03 June 2025 15:34:50 +0000 (0:00:11.575) 0:02:09.553 ********** 2025-06-03 15:34:55.133459 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:34:55.133463 | orchestrator | 2025-06-03 15:34:55.133466 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-06-03 15:34:55.133470 | orchestrator | Tuesday 03 June 2025 15:34:51 +0000 (0:00:00.702) 0:02:10.255 ********** 2025-06-03 15:34:55.133474 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-03 15:34:55.133478 | orchestrator | enable_outward_rabbitmq_True 2025-06-03 15:34:55.133482 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-03 15:34:55.133486 | orchestrator | outward_rabbitmq_restart 2025-06-03 15:34:55.133495 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:34:55.133499 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:34:55.133502 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:34:55.133506 | orchestrator | 2025-06-03 15:34:55.133510 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-06-03 15:34:55.133513 | orchestrator | skipping: no hosts matched 2025-06-03 15:34:55.133517 | orchestrator | 2025-06-03 15:34:55.133521 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-06-03 15:34:55.133582 | orchestrator | skipping: no hosts matched 2025-06-03 15:34:55.133597 | orchestrator | 2025-06-03 15:34:55.133601 | orchestrator | PLAY 
[Apply rabbitmq (outward) post-configuration] ***************************** 2025-06-03 15:34:55.133604 | orchestrator | skipping: no hosts matched 2025-06-03 15:34:55.133608 | orchestrator | 2025-06-03 15:34:55.133612 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:34:55.133616 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-03 15:34:55.133620 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-03 15:34:55.133624 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:34:55.133628 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:34:55.133631 | orchestrator | 2025-06-03 15:34:55.133635 | orchestrator | 2025-06-03 15:34:55.133639 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:34:55.133643 | orchestrator | Tuesday 03 June 2025 15:34:53 +0000 (0:00:02.247) 0:02:12.503 ********** 2025-06-03 15:34:55.133647 | orchestrator | =============================================================================== 2025-06-03 15:34:55.133651 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 80.70s 2025-06-03 15:34:55.133672 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 11.45s 2025-06-03 15:34:55.133677 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.65s 2025-06-03 15:34:55.133681 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.82s 2025-06-03 15:34:55.133685 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.45s 2025-06-03 15:34:55.133688 | orchestrator | rabbitmq : Enable all stable feature 
flags ------------------------------ 2.25s 2025-06-03 15:34:55.133726 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.99s 2025-06-03 15:34:55.133730 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.93s 2025-06-03 15:34:55.133734 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.91s 2025-06-03 15:34:55.133738 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.71s 2025-06-03 15:34:55.133742 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.69s 2025-06-03 15:34:55.133746 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.62s 2025-06-03 15:34:55.133750 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.42s 2025-06-03 15:34:55.133754 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.32s 2025-06-03 15:34:55.133760 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.28s 2025-06-03 15:34:55.133764 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.03s 2025-06-03 15:34:55.133768 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.99s 2025-06-03 15:34:55.133776 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.87s 2025-06-03 15:34:55.133780 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.85s 2025-06-03 15:34:55.133789 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.80s 2025-06-03 15:34:55.133793 | orchestrator | 2025-06-03 15:34:55 | INFO  | Task b2488e9b-0278-4b24-bce6-4fbe674b9626 is in state SUCCESS 2025-06-03 15:34:55.133797 | orchestrator | 2025-06-03 15:34:55 | INFO  | Task 
874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:55.133801 | orchestrator | 2025-06-03 15:34:55 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:55.133805 | orchestrator | 2025-06-03 15:34:55 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:34:58.162388 | orchestrator | 2025-06-03 15:34:58 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:34:58.162900 | orchestrator | 2025-06-03 15:34:58 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:34:58.163752 | orchestrator | 2025-06-03 15:34:58 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:34:58.163919 | orchestrator | 2025-06-03 15:34:58 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:01.212786 | orchestrator | 2025-06-03 15:35:01 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:01.215024 | orchestrator | 2025-06-03 15:35:01 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:01.216029 | orchestrator | 2025-06-03 15:35:01 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:01.217011 | orchestrator | 2025-06-03 15:35:01 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:04.268021 | orchestrator | 2025-06-03 15:35:04 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:04.269086 | orchestrator | 2025-06-03 15:35:04 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:04.269990 | orchestrator | 2025-06-03 15:35:04 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:04.270064 | orchestrator | 2025-06-03 15:35:04 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:07.329236 | orchestrator | 2025-06-03 15:35:07 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state 
STARTED 2025-06-03 15:35:07.332384 | orchestrator | 2025-06-03 15:35:07 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:07.334117 | orchestrator | 2025-06-03 15:35:07 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:07.334175 | orchestrator | 2025-06-03 15:35:07 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:10.369275 | orchestrator | 2025-06-03 15:35:10 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:10.370061 | orchestrator | 2025-06-03 15:35:10 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:10.371739 | orchestrator | 2025-06-03 15:35:10 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:10.371775 | orchestrator | 2025-06-03 15:35:10 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:13.403956 | orchestrator | 2025-06-03 15:35:13 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:13.404293 | orchestrator | 2025-06-03 15:35:13 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:13.405684 | orchestrator | 2025-06-03 15:35:13 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:13.405930 | orchestrator | 2025-06-03 15:35:13 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:16.447943 | orchestrator | 2025-06-03 15:35:16 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:16.448124 | orchestrator | 2025-06-03 15:35:16 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:16.448222 | orchestrator | 2025-06-03 15:35:16 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:16.448286 | orchestrator | 2025-06-03 15:35:16 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:19.501204 | orchestrator | 
2025-06-03 15:35:19 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:19.501609 | orchestrator | 2025-06-03 15:35:19 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:19.502457 | orchestrator | 2025-06-03 15:35:19 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:19.502857 | orchestrator | 2025-06-03 15:35:19 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:22.548886 | orchestrator | 2025-06-03 15:35:22 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:22.551045 | orchestrator | 2025-06-03 15:35:22 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:22.553067 | orchestrator | 2025-06-03 15:35:22 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:22.553164 | orchestrator | 2025-06-03 15:35:22 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:25.583518 | orchestrator | 2025-06-03 15:35:25 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:25.583644 | orchestrator | 2025-06-03 15:35:25 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:25.584760 | orchestrator | 2025-06-03 15:35:25 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:25.584796 | orchestrator | 2025-06-03 15:35:25 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:28.625720 | orchestrator | 2025-06-03 15:35:28 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:28.626979 | orchestrator | 2025-06-03 15:35:28 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:28.628932 | orchestrator | 2025-06-03 15:35:28 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:28.628972 | orchestrator | 2025-06-03 15:35:28 | INFO  | 
Wait 1 second(s) until the next check 2025-06-03 15:35:31.652760 | orchestrator | 2025-06-03 15:35:31 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:31.652832 | orchestrator | 2025-06-03 15:35:31 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:31.652838 | orchestrator | 2025-06-03 15:35:31 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:31.653836 | orchestrator | 2025-06-03 15:35:31 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:34.685823 | orchestrator | 2025-06-03 15:35:34 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:34.685929 | orchestrator | 2025-06-03 15:35:34 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:34.686415 | orchestrator | 2025-06-03 15:35:34 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:34.686440 | orchestrator | 2025-06-03 15:35:34 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:37.722077 | orchestrator | 2025-06-03 15:35:37 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:37.722555 | orchestrator | 2025-06-03 15:35:37 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:37.724241 | orchestrator | 2025-06-03 15:35:37 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:37.724352 | orchestrator | 2025-06-03 15:35:37 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:40.773529 | orchestrator | 2025-06-03 15:35:40 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:40.777226 | orchestrator | 2025-06-03 15:35:40 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:40.778442 | orchestrator | 2025-06-03 15:35:40 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state 
STARTED 2025-06-03 15:35:40.778474 | orchestrator | 2025-06-03 15:35:40 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:43.815999 | orchestrator | 2025-06-03 15:35:43 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:43.816835 | orchestrator | 2025-06-03 15:35:43 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:43.816872 | orchestrator | 2025-06-03 15:35:43 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:43.816895 | orchestrator | 2025-06-03 15:35:43 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:46.859965 | orchestrator | 2025-06-03 15:35:46 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:46.860363 | orchestrator | 2025-06-03 15:35:46 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:46.862771 | orchestrator | 2025-06-03 15:35:46 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:46.862822 | orchestrator | 2025-06-03 15:35:46 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:49.907641 | orchestrator | 2025-06-03 15:35:49 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:49.907864 | orchestrator | 2025-06-03 15:35:49 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:49.908534 | orchestrator | 2025-06-03 15:35:49 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:49.908565 | orchestrator | 2025-06-03 15:35:49 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:52.947517 | orchestrator | 2025-06-03 15:35:52 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:52.949869 | orchestrator | 2025-06-03 15:35:52 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:52.951456 | orchestrator | 
2025-06-03 15:35:52 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:52.951526 | orchestrator | 2025-06-03 15:35:52 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:55.992638 | orchestrator | 2025-06-03 15:35:55 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:55.993988 | orchestrator | 2025-06-03 15:35:55 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:55.995186 | orchestrator | 2025-06-03 15:35:55 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:55.995234 | orchestrator | 2025-06-03 15:35:55 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:35:59.047491 | orchestrator | 2025-06-03 15:35:59 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:35:59.048081 | orchestrator | 2025-06-03 15:35:59 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:35:59.049778 | orchestrator | 2025-06-03 15:35:59 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:35:59.049818 | orchestrator | 2025-06-03 15:35:59 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:36:02.094496 | orchestrator | 2025-06-03 15:36:02 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:36:02.098350 | orchestrator | 2025-06-03 15:36:02 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:36:02.099648 | orchestrator | 2025-06-03 15:36:02 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:36:02.099727 | orchestrator | 2025-06-03 15:36:02 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:36:05.147219 | orchestrator | 2025-06-03 15:36:05 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:36:05.148047 | orchestrator | 2025-06-03 15:36:05 | INFO  | Task 
874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:36:05.149413 | orchestrator | 2025-06-03 15:36:05 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:36:05.149445 | orchestrator | 2025-06-03 15:36:05 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:36:08.188654 | orchestrator | 2025-06-03 15:36:08 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:36:08.193937 | orchestrator | 2025-06-03 15:36:08 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:36:08.194727 | orchestrator | 2025-06-03 15:36:08 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:36:08.194858 | orchestrator | 2025-06-03 15:36:08 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:36:11.232775 | orchestrator | 2025-06-03 15:36:11 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:36:11.237343 | orchestrator | 2025-06-03 15:36:11 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:36:11.238255 | orchestrator | 2025-06-03 15:36:11 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:36:11.238962 | orchestrator | 2025-06-03 15:36:11 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:36:14.278051 | orchestrator | 2025-06-03 15:36:14 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:36:14.278206 | orchestrator | 2025-06-03 15:36:14 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED 2025-06-03 15:36:14.278287 | orchestrator | 2025-06-03 15:36:14 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:36:14.278299 | orchestrator | 2025-06-03 15:36:14 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:36:17.313709 | orchestrator | 2025-06-03 15:36:17 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state 
STARTED
2025-06-03 15:36:17.314203 | orchestrator | 2025-06-03 15:36:17 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED
2025-06-03 15:36:17.314636 | orchestrator | 2025-06-03 15:36:17 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:36:17.314657 | orchestrator | 2025-06-03 15:36:17 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:36:20.376795 | orchestrator | 2025-06-03 15:36:20 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:36:20.376866 | orchestrator | 2025-06-03 15:36:20 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state STARTED
2025-06-03 15:36:20.377235 | orchestrator | 2025-06-03 15:36:20 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:36:20.377254 | orchestrator | 2025-06-03 15:36:20 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:36:23.416410 | orchestrator | 2025-06-03 15:36:23 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:36:23.417343 | orchestrator | 2025-06-03 15:36:23 | INFO  | Task 874e5012-af84-430b-ad3a-db8ab497054f is in state SUCCESS
2025-06-03 15:36:23.418975 | orchestrator |
2025-06-03 15:36:23.419034 | orchestrator |
2025-06-03 15:36:23.419042 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-03 15:36:23.419049 | orchestrator |
2025-06-03 15:36:23.419055 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-03 15:36:23.419060 | orchestrator | Tuesday 03 June 2025 15:33:29 +0000 (0:00:00.163) 0:00:00.163 **********
2025-06-03 15:36:23.419066 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:36:23.419072 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:36:23.419077 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:36:23.419082 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:36:23.419087 | orchestrator | ok: 
[testbed-node-1] 2025-06-03 15:36:23.419093 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:23.419098 | orchestrator | 2025-06-03 15:36:23.419103 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:36:23.419108 | orchestrator | Tuesday 03 June 2025 15:33:30 +0000 (0:00:00.622) 0:00:00.786 ********** 2025-06-03 15:36:23.419114 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-06-03 15:36:23.419119 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-06-03 15:36:23.419125 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-06-03 15:36:23.419130 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-06-03 15:36:23.419135 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-06-03 15:36:23.419140 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-06-03 15:36:23.419145 | orchestrator | 2025-06-03 15:36:23.419150 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-06-03 15:36:23.419155 | orchestrator | 2025-06-03 15:36:23.419161 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-06-03 15:36:23.419166 | orchestrator | Tuesday 03 June 2025 15:33:31 +0000 (0:00:00.835) 0:00:01.621 ********** 2025-06-03 15:36:23.419172 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:36:23.419179 | orchestrator | 2025-06-03 15:36:23.419184 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-06-03 15:36:23.419189 | orchestrator | Tuesday 03 June 2025 15:33:32 +0000 (0:00:01.066) 0:00:02.687 ********** 2025-06-03 15:36:23.419196 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419205 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419223 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419262 | orchestrator | 2025-06-03 15:36:23.419278 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-06-03 15:36:23.419283 | orchestrator | Tuesday 03 June 2025 15:33:33 +0000 (0:00:01.148) 0:00:03.836 ********** 2025-06-03 15:36:23.419289 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419343 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419459 | orchestrator | 2025-06-03 15:36:23.419464 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-06-03 15:36:23.419469 | orchestrator | Tuesday 03 June 2025 15:33:34 +0000 (0:00:01.388) 0:00:05.224 ********** 
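The service definitions echoed in each loop item above (container_name, image, volumes, dimensions) are the data kolla-ansible feeds to its container handlers. As an illustrative aside, a dict of this shape can be translated into container runtime arguments roughly like this (the helper name and output format are assumptions for illustration, not kolla-ansible's actual API):

```python
# Illustrative sketch only: turn a kolla-style service definition, as seen in
# the loop items logged above, into a docker-CLI-like argument list.
service = {
    "container_name": "ovn_controller",
    "group": "ovn-controller",
    "enabled": True,
    "image": "registry.osism.tech/kolla/ovn-controller:2024.2",
    "volumes": [
        "/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro",
        "/run/openvswitch:/run/openvswitch:shared",
        "/etc/localtime:/etc/localtime:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    "dimensions": {},
}

def container_args(svc):
    """Build an argument list: name first, one --volume per bind mount, image last."""
    args = ["--name", svc["container_name"]]
    for vol in svc["volumes"]:
        args += ["--volume", vol]
    args.append(svc["image"])
    return args

print(container_args(service))
```

The same dict is reused by every task in the role (config directories, config.json, systemd override, container check), which is why the log repeats it once per task and node.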
2025-06-03 15:36:23.419474 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419480 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419512 | orchestrator | 2025-06-03 15:36:23.419517 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-06-03 15:36:23.419522 | orchestrator | Tuesday 03 June 2025 15:33:36 +0000 (0:00:01.411) 0:00:06.635 ********** 2025-06-03 15:36:23.419527 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419537 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419546 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419571 | orchestrator | 2025-06-03 15:36:23.419596 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-06-03 
15:36:23.419603 | orchestrator | Tuesday 03 June 2025 15:33:37 +0000 (0:00:01.635) 0:00:08.270 ********** 2025-06-03 15:36:23.419609 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419616 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419621 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.419638 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.419683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.419699 | orchestrator |
2025-06-03 15:36:23.419705 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-06-03 15:36:23.419710 | orchestrator | Tuesday 03 June 2025 15:33:39 +0000 (0:00:01.395) 0:00:09.665 **********
2025-06-03 15:36:23.419716 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:36:23.419722 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:36:23.419729 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:36:23.419734 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:36:23.419740 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:36:23.419745 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:36:23.419751 | orchestrator |
2025-06-03 15:36:23.419757 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-06-03 15:36:23.419763 | orchestrator | Tuesday 03 June 2025 15:33:41 +0000 (0:00:02.571) 0:00:12.237 **********
2025-06-03 15:36:23.419769 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-06-03 15:36:23.419776 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-06-03 15:36:23.419785 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-06-03 15:36:23.419794 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-06-03 15:36:23.419801 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-06-03 15:36:23.419808 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-06-03 15:36:23.419818 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-03 15:36:23.419827 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-03 15:36:23.419841 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-03 15:36:23.419851 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-03 15:36:23.419859 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-03 15:36:23.419866 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-03 15:36:23.419873 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-03 15:36:23.419880 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-03 15:36:23.419891 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-03 15:36:23.419897 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-03 15:36:23.419903 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-03 15:36:23.419909 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-03 15:36:23.419916 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-03 15:36:23.419923 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-03 15:36:23.419930 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-03 15:36:23.419936 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-03 15:36:23.419943 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-03 15:36:23.419948 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-03 15:36:23.419954 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-03 15:36:23.419960 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-03 15:36:23.419965 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-03 15:36:23.419971 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-03 15:36:23.419976 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-03 15:36:23.419981 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-03 15:36:23.419990 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-03 15:36:23.419995 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-03 15:36:23.420001 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-03 15:36:23.420006 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-03 15:36:23.420011 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-03 15:36:23.420016 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-03 15:36:23.420022 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-03 15:36:23.420027 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-03 15:36:23.420032 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-03 15:36:23.420037 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-03 15:36:23.420042 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-03 15:36:23.420047 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-03 15:36:23.420053 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-06-03 15:36:23.420063 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-06-03 15:36:23.420072 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-06-03 15:36:23.420077 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-06-03 15:36:23.420083 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-06-03 15:36:23.420088 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-06-03 15:36:23.420093 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-03 15:36:23.420099 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-03 15:36:23.420104 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-03 15:36:23.420109 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-03 15:36:23.420114 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-03 15:36:23.420119 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-03 15:36:23.420124 | orchestrator |
2025-06-03 15:36:23.420130 | orchestrator | TASK [ovn-controller : Flush handlers] 
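The name/value pairs applied by the "Configure OVN in OVSDB" task above are written as external_ids on the local Open vSwitch database, where ovn-controller reads its encapsulation IP, tunnel type, and southbound remotes. Roughly equivalent hand-run commands can be composed like this, using the values logged for testbed-node-0 (a sketch that only builds the command strings; the role itself drives this through Ansible, not a shell loop):

```python
# Sketch: compose ovs-vsctl invocations matching the external_ids
# settings shown in the log above (values from testbed-node-0).
settings = {
    "ovn-encap-ip": "192.168.16.10",
    "ovn-encap-type": "geneve",
    "ovn-remote": "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642",
    "ovn-remote-probe-interval": "60000",
    "ovn-openflow-probe-interval": "60",
}

commands = [
    f'ovs-vsctl set open_vswitch . external_ids:{name}="{value}"'
    for name, value in settings.items()
]
for cmd in commands:
    print(cmd)
```

Note how the later loop items differ per host role: ovn-cms-options with enable-chassis-as-gw is set to present only on the three hosts acting as gateway chassis (testbed-node-0/1/2) and absent elsewhere.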
***************************************** 2025-06-03 15:36:23.420135 | orchestrator | Tuesday 03 June 2025 15:34:01 +0000 (0:00:19.412) 0:00:31.649 ********** 2025-06-03 15:36:23.420140 | orchestrator | 2025-06-03 15:36:23.420146 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-03 15:36:23.420151 | orchestrator | Tuesday 03 June 2025 15:34:01 +0000 (0:00:00.064) 0:00:31.714 ********** 2025-06-03 15:36:23.420156 | orchestrator | 2025-06-03 15:36:23.420162 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-03 15:36:23.420167 | orchestrator | Tuesday 03 June 2025 15:34:01 +0000 (0:00:00.083) 0:00:31.797 ********** 2025-06-03 15:36:23.420172 | orchestrator | 2025-06-03 15:36:23.420177 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-03 15:36:23.420183 | orchestrator | Tuesday 03 June 2025 15:34:01 +0000 (0:00:00.066) 0:00:31.864 ********** 2025-06-03 15:36:23.420188 | orchestrator | 2025-06-03 15:36:23.420193 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-03 15:36:23.420198 | orchestrator | Tuesday 03 June 2025 15:34:01 +0000 (0:00:00.064) 0:00:31.928 ********** 2025-06-03 15:36:23.420204 | orchestrator | 2025-06-03 15:36:23.420209 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-03 15:36:23.420214 | orchestrator | Tuesday 03 June 2025 15:34:01 +0000 (0:00:00.072) 0:00:32.000 ********** 2025-06-03 15:36:23.420220 | orchestrator | 2025-06-03 15:36:23.420225 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-06-03 15:36:23.420230 | orchestrator | Tuesday 03 June 2025 15:34:01 +0000 (0:00:00.088) 0:00:32.088 ********** 2025-06-03 15:36:23.420235 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:36:23.420241 | orchestrator | ok: 
[testbed-node-3] 2025-06-03 15:36:23.420246 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:23.420255 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:36:23.420261 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:23.420266 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:23.420271 | orchestrator | 2025-06-03 15:36:23.420276 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-06-03 15:36:23.420282 | orchestrator | Tuesday 03 June 2025 15:34:03 +0000 (0:00:02.294) 0:00:34.383 ********** 2025-06-03 15:36:23.420290 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:36:23.420295 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:36:23.420300 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:36:23.420305 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:36:23.420310 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:36:23.420316 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:36:23.420321 | orchestrator | 2025-06-03 15:36:23.420326 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-06-03 15:36:23.420331 | orchestrator | 2025-06-03 15:36:23.420336 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-03 15:36:23.420342 | orchestrator | Tuesday 03 June 2025 15:35:08 +0000 (0:01:04.229) 0:01:38.613 ********** 2025-06-03 15:36:23.420347 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:36:23.420352 | orchestrator | 2025-06-03 15:36:23.420358 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-03 15:36:23.420363 | orchestrator | Tuesday 03 June 2025 15:35:08 +0000 (0:00:00.537) 0:01:39.150 ********** 2025-06-03 15:36:23.420369 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-06-03 15:36:23.420375 | orchestrator | 2025-06-03 15:36:23.420381 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-06-03 15:36:23.420386 | orchestrator | Tuesday 03 June 2025 15:35:09 +0000 (0:00:00.757) 0:01:39.907 ********** 2025-06-03 15:36:23.420391 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:23.420397 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:23.420402 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:23.420408 | orchestrator | 2025-06-03 15:36:23.420413 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-06-03 15:36:23.420418 | orchestrator | Tuesday 03 June 2025 15:35:10 +0000 (0:00:00.825) 0:01:40.733 ********** 2025-06-03 15:36:23.420424 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:23.420430 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:23.420436 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:23.420444 | orchestrator | 2025-06-03 15:36:23.420450 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-06-03 15:36:23.420455 | orchestrator | Tuesday 03 June 2025 15:35:10 +0000 (0:00:00.326) 0:01:41.060 ********** 2025-06-03 15:36:23.420461 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:23.420467 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:23.420472 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:23.420477 | orchestrator | 2025-06-03 15:36:23.420482 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-06-03 15:36:23.420488 | orchestrator | Tuesday 03 June 2025 15:35:10 +0000 (0:00:00.326) 0:01:41.387 ********** 2025-06-03 15:36:23.420493 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:36:23.420499 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:36:23.420504 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:36:23.420510 | orchestrator | 
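The lookup_cluster.yml tasks above split the three control hosts by whether OVN NB/SB database volumes already exist, which later decides whether the cluster is bootstrapped from scratch or joined. A minimal sketch of that partitioning (hypothetical structure, not the role's actual code; on this fresh deploy all volumes are assumed absent):

```python
# Sketch: partition hosts by pre-existing OVN NB volume, mirroring the
# "Divide hosts by their OVN NB volume availability" task. The facts
# below are assumed for a fresh deploy, not read from the log.
volume_facts = {
    "testbed-node-0": False,
    "testbed-node-1": False,
    "testbed-node-2": False,
}

have_volume = [h for h, v in sorted(volume_facts.items()) if v]
need_bootstrap = [h for h, v in sorted(volume_facts.items()) if not v]

# A cluster is considered pre-existing if any host already holds a volume.
cluster_exists = bool(have_volume)
print(cluster_exists, need_bootstrap)
```

With no pre-existing volumes, the subsequent leader-lookup tasks are skipped, which matches the run of "skipping:" results that follows.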
2025-06-03 15:36:23.420515 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-06-03 15:36:23.420521 | orchestrator | Tuesday 03 June 2025 15:35:11 +0000 (0:00:00.554) 0:01:41.941 **********
2025-06-03 15:36:23.420526 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:36:23.420532 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:36:23.420537 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:36:23.420542 | orchestrator |
2025-06-03 15:36:23.420548 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-06-03 15:36:23.420553 | orchestrator | Tuesday 03 June 2025 15:35:11 +0000 (0:00:00.338) 0:01:42.280 **********
2025-06-03 15:36:23.420559 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.420564 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.420570 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.420575 | orchestrator |
2025-06-03 15:36:23.420581 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-06-03 15:36:23.420590 | orchestrator | Tuesday 03 June 2025 15:35:12 +0000 (0:00:00.363) 0:01:42.644 **********
2025-06-03 15:36:23.420595 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.420601 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.420606 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.420612 | orchestrator |
2025-06-03 15:36:23.420617 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-06-03 15:36:23.420622 | orchestrator | Tuesday 03 June 2025 15:35:12 +0000 (0:00:00.304) 0:01:42.948 **********
2025-06-03 15:36:23.420628 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.420633 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.420638 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.420644 | orchestrator |
2025-06-03 15:36:23.420649 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-06-03 15:36:23.420655 | orchestrator | Tuesday 03 June 2025 15:35:13 +0000 (0:00:00.531) 0:01:43.480 **********
2025-06-03 15:36:23.420661 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.420683 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.420688 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.420693 | orchestrator |
2025-06-03 15:36:23.420699 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-06-03 15:36:23.420704 | orchestrator | Tuesday 03 June 2025 15:35:13 +0000 (0:00:00.303) 0:01:43.784 **********
2025-06-03 15:36:23.420710 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.420715 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.420721 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.420727 | orchestrator |
2025-06-03 15:36:23.420732 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-06-03 15:36:23.420737 | orchestrator | Tuesday 03 June 2025 15:35:13 +0000 (0:00:00.310) 0:01:44.095 **********
2025-06-03 15:36:23.420743 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.420749 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.420754 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.420759 | orchestrator |
2025-06-03 15:36:23.420772 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-06-03 15:36:23.420778 | orchestrator | Tuesday 03 June 2025 15:35:13 +0000 (0:00:00.314) 0:01:44.409 **********
2025-06-03 15:36:23.420784 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.420790 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.420795 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.420800 | orchestrator |
2025-06-03 15:36:23.420806 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-06-03 15:36:23.420812 | orchestrator | Tuesday 03 June 2025 15:35:14 +0000 (0:00:00.520) 0:01:44.929 **********
2025-06-03 15:36:23.420818 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.420824 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.420829 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.420837 | orchestrator |
2025-06-03 15:36:23.420844 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-06-03 15:36:23.420852 | orchestrator | Tuesday 03 June 2025 15:35:14 +0000 (0:00:00.305) 0:01:45.235 **********
2025-06-03 15:36:23.420860 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.420869 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.420877 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.420890 | orchestrator |
2025-06-03 15:36:23.420902 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-06-03 15:36:23.420910 | orchestrator | Tuesday 03 June 2025 15:35:15 +0000 (0:00:00.287) 0:01:45.523 **********
2025-06-03 15:36:23.420919 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.420927 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.420935 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.420944 | orchestrator |
2025-06-03 15:36:23.420952 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-06-03 15:36:23.420967 | orchestrator | Tuesday 03 June 2025 15:35:15 +0000 (0:00:00.315) 0:01:45.838 **********
2025-06-03 15:36:23.420975 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.420984 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.420993 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.421001 | orchestrator |
2025-06-03 15:36:23.421010 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-06-03 15:36:23.421020 | orchestrator | Tuesday 03 June 2025 15:35:15 +0000 (0:00:00.566) 0:01:46.405 **********
2025-06-03 15:36:23.421026 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.421031 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.421043 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.421048 | orchestrator |
2025-06-03 15:36:23.421054 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-03 15:36:23.421059 | orchestrator | Tuesday 03 June 2025 15:35:16 +0000 (0:00:00.393) 0:01:46.798 **********
2025-06-03 15:36:23.421065 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:36:23.421070 | orchestrator |
2025-06-03 15:36:23.421076 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-06-03 15:36:23.421081 | orchestrator | Tuesday 03 June 2025 15:35:17 +0000 (0:00:00.710) 0:01:47.509 **********
2025-06-03 15:36:23.421086 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:36:23.421092 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:36:23.421098 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:36:23.421103 | orchestrator |
2025-06-03 15:36:23.421109 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-06-03 15:36:23.421114 | orchestrator | Tuesday 03 June 2025 15:35:18 +0000 (0:00:01.177) 0:01:48.687 **********
2025-06-03 15:36:23.421119 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:36:23.421125 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:36:23.421131 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:36:23.421137 | orchestrator |
2025-06-03 15:36:23.421142 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-06-03 15:36:23.421148 | orchestrator | Tuesday 03 June 2025 15:35:18 +0000 (0:00:00.452) 0:01:49.139 **********
2025-06-03 15:36:23.421153 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.421159 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.421164 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.421169 | orchestrator |
2025-06-03 15:36:23.421175 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-06-03 15:36:23.421180 | orchestrator | Tuesday 03 June 2025 15:35:19 +0000 (0:00:00.442) 0:01:49.582 **********
2025-06-03 15:36:23.421186 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.421192 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.421197 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.421202 | orchestrator |
2025-06-03 15:36:23.421208 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-06-03 15:36:23.421213 | orchestrator | Tuesday 03 June 2025 15:35:19 +0000 (0:00:00.366) 0:01:49.949 **********
2025-06-03 15:36:23.421218 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.421224 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.421230 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.421235 | orchestrator |
2025-06-03 15:36:23.421240 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-06-03 15:36:23.421246 | orchestrator | Tuesday 03 June 2025 15:35:19 +0000 (0:00:00.392) 0:01:50.341 **********
2025-06-03 15:36:23.421251 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.421257 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.421262 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.421268 | orchestrator |
2025-06-03 15:36:23.421273 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-06-03 15:36:23.421279 | orchestrator | Tuesday 03 June 2025 15:35:20 +0000 (0:00:00.723) 0:01:51.065 **********
2025-06-03 15:36:23.421290 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.421295 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.421301 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.421308 | orchestrator |
2025-06-03 15:36:23.421316 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-06-03 15:36:23.421325 | orchestrator | Tuesday 03 June 2025 15:35:20 +0000 (0:00:00.328) 0:01:51.394 **********
2025-06-03 15:36:23.421338 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.421347 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.421364 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.421373 | orchestrator |
2025-06-03 15:36:23.421381 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-06-03 15:36:23.421390 | orchestrator | Tuesday 03 June 2025 15:35:21 +0000 (0:00:00.305) 0:01:51.699 **********
2025-06-03 15:36:23.421399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421783 | orchestrator |
2025-06-03 15:36:23.421789 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-06-03 15:36:23.421795 | orchestrator | Tuesday 03 June 2025 15:35:22 +0000 (0:00:01.579) 0:01:53.279 **********
2025-06-03 15:36:23.421805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421867 | orchestrator |
2025-06-03 15:36:23.421873 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-06-03 15:36:23.421878 | orchestrator | Tuesday 03 June 2025 15:35:26 +0000 (0:00:03.898) 0:01:57.178 **********
2025-06-03 15:36:23.421884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.421948 | orchestrator |
2025-06-03 15:36:23.421953 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-03 15:36:23.421959 | orchestrator | Tuesday 03 June 2025 15:35:28 +0000 (0:00:02.126) 0:01:59.304 **********
2025-06-03 15:36:23.421965 | orchestrator |
2025-06-03 15:36:23.421971 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-03 15:36:23.421976 | orchestrator | Tuesday 03 June 2025 15:35:28 +0000 (0:00:00.071) 0:01:59.376 **********
2025-06-03 15:36:23.421982 | orchestrator |
2025-06-03 15:36:23.421987 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-03 15:36:23.421993 | orchestrator | Tuesday 03 June 2025 15:35:28 +0000 (0:00:00.069) 0:01:59.445 **********
2025-06-03 15:36:23.421998 | orchestrator |
2025-06-03 15:36:23.422004 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-06-03 15:36:23.422009 | orchestrator | Tuesday 03 June 2025 15:35:29 +0000 (0:00:00.068) 0:01:59.514 **********
2025-06-03 15:36:23.422060 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:36:23.422066 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:36:23.422071 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:36:23.422077 | orchestrator |
2025-06-03 15:36:23.422082 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-06-03 15:36:23.422088 | orchestrator | Tuesday 03 June 2025 15:35:31 +0000 (0:00:02.533) 0:02:02.047 **********
2025-06-03 15:36:23.422093 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:36:23.422098 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:36:23.422104 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:36:23.422109 | orchestrator |
2025-06-03 15:36:23.422115 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-06-03 15:36:23.422121 | orchestrator | Tuesday 03 June 2025 15:35:34 +0000 (0:00:02.996) 0:02:05.044 **********
2025-06-03 15:36:23.422126 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:36:23.422132 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:36:23.422137 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:36:23.422143 | orchestrator |
2025-06-03 15:36:23.422149 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-06-03 15:36:23.422154 | orchestrator | Tuesday 03 June 2025 15:35:41 +0000 (0:00:06.871) 0:02:11.915 **********
2025-06-03 15:36:23.422159 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.422165 | orchestrator |
2025-06-03 15:36:23.422170 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-06-03 15:36:23.422176 | orchestrator | Tuesday 03 June 2025 15:35:41 +0000 (0:00:00.122) 0:02:12.038 **********
2025-06-03 15:36:23.422185 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:36:23.422229 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:36:23.422266 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:36:23.422275 | orchestrator |
2025-06-03 15:36:23.422282 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-06-03 15:36:23.422290 | orchestrator | Tuesday 03 June 2025 15:35:42 +0000 (0:00:00.731) 0:02:12.769 **********
2025-06-03 15:36:23.422299 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.422307 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.422320 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:36:23.422328 | orchestrator |
2025-06-03 15:36:23.422338 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-06-03 15:36:23.422347 | orchestrator | Tuesday 03 June 2025 15:35:43 +0000 (0:00:00.855) 0:02:13.625 **********
2025-06-03 15:36:23.422357 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:36:23.422375 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:36:23.422384 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:36:23.422393 | orchestrator |
2025-06-03 15:36:23.422403 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-06-03 15:36:23.422412 | orchestrator | Tuesday 03 June 2025 15:35:43 +0000 (0:00:00.836) 0:02:14.461 **********
2025-06-03 15:36:23.422421 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.422430 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.422439 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:36:23.422448 | orchestrator |
2025-06-03 15:36:23.422457 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-06-03 15:36:23.422465 | orchestrator | Tuesday 03 June 2025 15:35:44 +0000 (0:00:00.615) 0:02:15.077 **********
2025-06-03 15:36:23.422474 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:36:23.422483 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:36:23.422500 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:36:23.422509 | orchestrator |
2025-06-03 15:36:23.422517 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-06-03 15:36:23.422526 | orchestrator | Tuesday 03 June 2025 15:35:45 +0000 (0:00:00.785) 0:02:15.863 **********
2025-06-03 15:36:23.422534 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:36:23.422543 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:36:23.422552 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:36:23.422561 | orchestrator |
2025-06-03 15:36:23.422570 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-06-03 15:36:23.422579 | orchestrator | Tuesday 03 June 2025 15:35:46 +0000 (0:00:01.267) 0:02:17.130 **********
2025-06-03 15:36:23.422588 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:36:23.422597 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:36:23.422607 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:36:23.422617 | orchestrator |
2025-06-03 15:36:23.422627 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-06-03 15:36:23.422637 | orchestrator | Tuesday 03 June 2025 15:35:46 +0000 (0:00:00.317) 0:02:17.447 **********
2025-06-03 15:36:23.422647 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.422657 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.422693 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.422703 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.422720 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.422738 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.422749 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.422760 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.422777 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.422787 | orchestrator |
2025-06-03 15:36:23.422796 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-06-03 15:36:23.422806 | orchestrator | Tuesday 03 June 2025 15:35:48 +0000 (0:00:01.614) 0:02:19.062 **********
2025-06-03 15:36:23.422816 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.422826 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.422837 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.422847 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.422857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.422878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.422891 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.422901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.422911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-03 15:36:23.422921 | orchestrator |
2025-06-03 15:36:23.422931 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-06-03 15:36:23.422941 | orchestrator | Tuesday 03 June 2025 15:35:53 +0000 (0:00:04.508) 0:02:23.570 **********
2025-06-03 15:36:23.422958 | orchestrator | ok: [testbed-node-0] => (item={'key':
'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.422968 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.422978 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.422988 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.422998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.423009 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.423036 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.423048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.423058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:36:23.423068 | orchestrator | 2025-06-03 15:36:23.423077 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-03 15:36:23.423087 | orchestrator | Tuesday 03 June 2025 15:35:56 +0000 (0:00:03.348) 
0:02:26.919 **********
2025-06-03 15:36:23.423098 | orchestrator |
2025-06-03 15:36:23.423108 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-03 15:36:23.423120 | orchestrator | Tuesday 03 June 2025 15:35:56 +0000 (0:00:00.067) 0:02:26.987 **********
2025-06-03 15:36:23.423130 | orchestrator |
2025-06-03 15:36:23.423139 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-03 15:36:23.423149 | orchestrator | Tuesday 03 June 2025 15:35:56 +0000 (0:00:00.066) 0:02:27.053 **********
2025-06-03 15:36:23.423159 | orchestrator |
2025-06-03 15:36:23.423169 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-06-03 15:36:23.423179 | orchestrator | Tuesday 03 June 2025 15:35:56 +0000 (0:00:00.067) 0:02:27.120 **********
2025-06-03 15:36:23.423188 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:36:23.423198 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:36:23.423208 | orchestrator |
2025-06-03 15:36:23.423224 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-06-03 15:36:23.423234 | orchestrator | Tuesday 03 June 2025 15:36:02 +0000 (0:00:06.187) 0:02:33.308 **********
2025-06-03 15:36:23.423244 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:36:23.423254 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:36:23.423264 | orchestrator |
2025-06-03 15:36:23.423274 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-06-03 15:36:23.423284 | orchestrator | Tuesday 03 June 2025 15:36:09 +0000 (0:00:06.495) 0:02:39.803 **********
2025-06-03 15:36:23.423295 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:36:23.423305 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:36:23.423316 | orchestrator |
2025-06-03 15:36:23.423325 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-06-03 15:36:23.423335 | orchestrator | Tuesday 03 June 2025 15:36:15 +0000 (0:00:06.146) 0:02:45.950 **********
2025-06-03 15:36:23.423345 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:36:23.423356 | orchestrator |
2025-06-03 15:36:23.423366 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-06-03 15:36:23.423377 | orchestrator | Tuesday 03 June 2025 15:36:15 +0000 (0:00:00.124) 0:02:46.074 **********
2025-06-03 15:36:23.423394 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:36:23.423405 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:36:23.423415 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:36:23.423425 | orchestrator |
2025-06-03 15:36:23.423435 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-06-03 15:36:23.423444 | orchestrator | Tuesday 03 June 2025 15:36:16 +0000 (0:00:00.992) 0:02:47.067 **********
2025-06-03 15:36:23.423455 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.423465 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.423475 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:36:23.423485 | orchestrator |
2025-06-03 15:36:23.423495 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-06-03 15:36:23.423505 | orchestrator | Tuesday 03 June 2025 15:36:17 +0000 (0:00:00.761) 0:02:47.734 **********
2025-06-03 15:36:23.423515 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:36:23.423525 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:36:23.423535 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:36:23.423546 | orchestrator |
2025-06-03 15:36:23.423556 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-06-03 15:36:23.423565 | orchestrator | Tuesday 03 June 2025 15:36:18 +0000 (0:00:00.761) 0:02:48.495 **********
2025-06-03 15:36:23.423575 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:36:23.423585 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:36:23.423596 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:36:23.423606 | orchestrator |
2025-06-03 15:36:23.423615 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-06-03 15:36:23.423625 | orchestrator | Tuesday 03 June 2025 15:36:18 +0000 (0:00:00.581) 0:02:49.076 **********
2025-06-03 15:36:23.423635 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:36:23.423645 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:36:23.423655 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:36:23.423691 | orchestrator |
2025-06-03 15:36:23.423702 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-06-03 15:36:23.423712 | orchestrator | Tuesday 03 June 2025 15:36:19 +0000 (0:00:01.255) 0:02:50.331 **********
2025-06-03 15:36:23.423722 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:36:23.423732 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:36:23.423743 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:36:23.423753 | orchestrator |
2025-06-03 15:36:23.423763 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:36:23.423779 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-03 15:36:23.423790 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-06-03 15:36:23.423800 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-06-03 15:36:23.423810 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:36:23.423820 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:36:23.423831 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:36:23.423841 | orchestrator |
2025-06-03 15:36:23.423851 | orchestrator |
2025-06-03 15:36:23.423862 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:36:23.423872 | orchestrator | Tuesday 03 June 2025 15:36:20 +0000 (0:00:01.004) 0:02:51.336 **********
2025-06-03 15:36:23.423882 | orchestrator | ===============================================================================
2025-06-03 15:36:23.423892 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 64.23s
2025-06-03 15:36:23.423910 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.41s
2025-06-03 15:36:23.423919 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.02s
2025-06-03 15:36:23.423929 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.49s
2025-06-03 15:36:23.423940 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.72s
2025-06-03 15:36:23.423950 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.51s
2025-06-03 15:36:23.423961 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.90s
2025-06-03 15:36:23.423975 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.35s
2025-06-03 15:36:23.423987 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.57s
2025-06-03 15:36:23.423996 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.29s
2025-06-03 15:36:23.424006 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.13s
2025-06-03 15:36:23.424015 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.64s
2025-06-03 15:36:23.424025 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.61s
2025-06-03 15:36:23.424035 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.58s
2025-06-03 15:36:23.424045 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.41s
2025-06-03 15:36:23.424054 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.40s
2025-06-03 15:36:23.424063 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.39s
2025-06-03 15:36:23.424074 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.27s
2025-06-03 15:36:23.424084 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.26s
2025-06-03 15:36:23.424094 | orchestrator | ovn-db : Set bootstrap args fact for NB (new cluster) ------------------- 1.18s
2025-06-03 15:36:23.424105 | orchestrator | 2025-06-03 15:36:23 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:36:23.424116 | orchestrator | 2025-06-03 15:36:23 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:36:26.454316 | orchestrator | 2025-06-03 15:36:26 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:36:26.455419 | orchestrator | 2025-06-03 15:36:26 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:36:26.455615 | orchestrator | 2025-06-03 15:36:26 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:36:29.501113 | orchestrator | 2025-06-03 15:36:29 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:36:29.501187 | orchestrator | 2025-06-03 15:36:29 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED
2025-06-03 15:36:29.501194 | orchestrator | 2025-06-03 15:36:29 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:36:32.531365 | orchestrator | 2025-06-03 15:36:32 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:36:32.531458 | orchestrator | 2025-06-03 15:36:32 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:36:32.531474 | orchestrator | 2025-06-03 15:36:32 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:36:35.558649 | orchestrator | 2025-06-03 15:36:35 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:36:35.559097 | orchestrator | 2025-06-03 15:36:35 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:36:35.559145 | orchestrator | 2025-06-03 15:36:35 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:36:38.611436 | orchestrator | 2025-06-03 15:36:38 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:36:38.614147 | orchestrator | 2025-06-03 15:36:38 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:36:38.614226 | orchestrator | 2025-06-03 15:36:38 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:36:41.652305 | orchestrator | 2025-06-03 15:36:41 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:36:41.652419 | orchestrator | 2025-06-03 15:36:41 | INFO  | Task 87e33e3a-55ec-48cf-80a4-355e201f334d is in state STARTED 2025-06-03 15:36:41.653470 | orchestrator | 2025-06-03 15:36:41 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:36:41.653503 | orchestrator | 2025-06-03 15:36:41 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:36:44.699322 | orchestrator | 2025-06-03 15:36:44 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:36:44.703586 | orchestrator | 2025-06-03 15:36:44 | INFO  | Task 
87e33e3a-55ec-48cf-80a4-355e201f334d is in state STARTED 2025-06-03 15:36:44.703733 | orchestrator | 2025-06-03 15:36:44 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:36:44.703741 | orchestrator | 2025-06-03 15:36:44 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:36:47.732838 | orchestrator | 2025-06-03 15:36:47 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:36:47.734093 | orchestrator | 2025-06-03 15:36:47 | INFO  | Task 87e33e3a-55ec-48cf-80a4-355e201f334d is in state STARTED 2025-06-03 15:36:47.735553 | orchestrator | 2025-06-03 15:36:47 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:36:47.735591 | orchestrator | 2025-06-03 15:36:47 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:36:50.783801 | orchestrator | 2025-06-03 15:36:50 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:36:50.783906 | orchestrator | 2025-06-03 15:36:50 | INFO  | Task 87e33e3a-55ec-48cf-80a4-355e201f334d is in state STARTED 2025-06-03 15:36:50.784007 | orchestrator | 2025-06-03 15:36:50 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:36:50.784024 | orchestrator | 2025-06-03 15:36:50 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:36:53.822581 | orchestrator | 2025-06-03 15:36:53 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:36:53.823926 | orchestrator | 2025-06-03 15:36:53 | INFO  | Task 87e33e3a-55ec-48cf-80a4-355e201f334d is in state STARTED 2025-06-03 15:36:53.825539 | orchestrator | 2025-06-03 15:36:53 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:36:53.826173 | orchestrator | 2025-06-03 15:36:53 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:36:56.868190 | orchestrator | 2025-06-03 15:36:56 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state 
STARTED 2025-06-03 15:36:56.868294 | orchestrator | 2025-06-03 15:36:56 | INFO  | Task 87e33e3a-55ec-48cf-80a4-355e201f334d is in state SUCCESS 2025-06-03 15:36:56.869062 | orchestrator | 2025-06-03 15:36:56 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:36:56.869102 | orchestrator | 2025-06-03 15:36:56 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:36:59.916076 | orchestrator | 2025-06-03 15:36:59 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:36:59.918062 | orchestrator | 2025-06-03 15:36:59 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:36:59.918380 | orchestrator | 2025-06-03 15:36:59 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:37:02.961170 | orchestrator | 2025-06-03 15:37:02 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:37:02.962515 | orchestrator | 2025-06-03 15:37:02 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:37:02.962559 | orchestrator | 2025-06-03 15:37:02 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:37:05.999806 | orchestrator | 2025-06-03 15:37:05 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:37:06.001189 | orchestrator | 2025-06-03 15:37:05 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:37:06.001290 | orchestrator | 2025-06-03 15:37:05 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:37:09.049792 | orchestrator | 2025-06-03 15:37:09 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:37:09.051851 | orchestrator | 2025-06-03 15:37:09 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:37:09.051903 | orchestrator | 2025-06-03 15:37:09 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:37:12.093742 | orchestrator | 2025-06-03 15:37:12 | INFO  
| Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:37:12.094122 | orchestrator | 2025-06-03 15:37:12 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:37:12.094152 | orchestrator | 2025-06-03 15:37:12 | INFO  | Wait 1 second(s)
until the next check 2025-06-03 15:38:16.082446 | orchestrator | 2025-06-03 15:38:16 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:38:16.084578 | orchestrator | 2025-06-03 15:38:16 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:38:16.084634 | orchestrator | 2025-06-03 15:38:16 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:38:19.130890 | orchestrator | 2025-06-03 15:38:19 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:38:19.133052 | orchestrator | 2025-06-03 15:38:19 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:38:19.133079 | orchestrator | 2025-06-03 15:38:19 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:38:22.173436 | orchestrator | 2025-06-03 15:38:22 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:38:22.174966 | orchestrator | 2025-06-03 15:38:22 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:38:22.175044 | orchestrator | 2025-06-03 15:38:22 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:38:25.217862 | orchestrator | 2025-06-03 15:38:25 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:38:25.218510 | orchestrator | 2025-06-03 15:38:25 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state STARTED 2025-06-03 15:38:25.218755 | orchestrator | 2025-06-03 15:38:25 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:38:28.260207 | orchestrator | 2025-06-03 15:38:28 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03 15:38:28.273143 | orchestrator | 2025-06-03 15:38:28.273253 | orchestrator | None 2025-06-03 15:38:28.273277 | orchestrator | 2025-06-03 15:38:28.273297 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:38:28.273436 | orchestrator | 
2025-06-03 15:38:28.273462 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-03 15:38:28.273481 | orchestrator | Tuesday 03 June 2025 15:32:16 +0000 (0:00:00.533) 0:00:00.533 **********
2025-06-03 15:38:28.273495 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:38:28.273506 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:38:28.273516 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:38:28.273525 | orchestrator |
2025-06-03 15:38:28.273535 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-03 15:38:28.273545 | orchestrator | Tuesday 03 June 2025 15:32:17 +0000 (0:00:00.366) 0:00:00.899 **********
2025-06-03 15:38:28.273555 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-06-03 15:38:28.273565 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-06-03 15:38:28.273575 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-06-03 15:38:28.273584 | orchestrator |
2025-06-03 15:38:28.273594 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-06-03 15:38:28.273603 | orchestrator |
2025-06-03 15:38:28.273613 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-06-03 15:38:28.273622 | orchestrator | Tuesday 03 June 2025 15:32:17 +0000 (0:00:00.618) 0:00:01.518 **********
2025-06-03 15:38:28.273632 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:38:28.273710 | orchestrator |
2025-06-03 15:38:28.273723 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-06-03 15:38:28.273734 | orchestrator | Tuesday 03 June 2025 15:32:18 +0000 (0:00:00.578) 0:00:02.096 **********
2025-06-03 15:38:28.273745 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:38:28.273755 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:38:28.273766 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:38:28.273777 | orchestrator |
2025-06-03 15:38:28.273788 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-06-03 15:38:28.273799 | orchestrator | Tuesday 03 June 2025 15:32:19 +0000 (0:00:00.699) 0:00:02.795 **********
2025-06-03 15:38:28.273810 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:38:28.273822 | orchestrator |
2025-06-03 15:38:28.273838 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-06-03 15:38:28.273861 | orchestrator | Tuesday 03 June 2025 15:32:20 +0000 (0:00:01.148) 0:00:03.944 **********
2025-06-03 15:38:28.273880 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:38:28.273896 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:38:28.273911 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:38:28.274194 | orchestrator |
2025-06-03 15:38:28.274226 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-06-03 15:38:28.274244 | orchestrator | Tuesday 03 June 2025 15:32:20 +0000 (0:00:00.671) 0:00:04.615 **********
2025-06-03 15:38:28.274256 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-03 15:38:28.274269 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-03 15:38:28.274281 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-03 15:38:28.274296 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-03 15:38:28.274309 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-03 15:38:28.274322 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-03 15:38:28.274336 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-03 15:38:28.274349 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-03 15:38:28.274377 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-03 15:38:28.274386 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-03 15:38:28.274394 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-03 15:38:28.274402 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-03 15:38:28.274410 | orchestrator |
2025-06-03 15:38:28.274418 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-03 15:38:28.274425 | orchestrator | Tuesday 03 June 2025 15:32:25 +0000 (0:00:04.148) 0:00:08.763 **********
2025-06-03 15:38:28.274443 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-03 15:38:28.274452 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-03 15:38:28.274459 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-03 15:38:28.274467 | orchestrator |
2025-06-03 15:38:28.274543 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-03 15:38:28.274552 | orchestrator | Tuesday 03 June 2025 15:32:26 +0000 (0:00:01.166) 0:00:09.930 **********
2025-06-03 15:38:28.274559 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-03 15:38:28.274568 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-03 15:38:28.274575 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-03 15:38:28.274589 | orchestrator |
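In the sysctl task above, numeric values report `changed` while entries whose value is the sentinel string `KOLLA_UNSET` report `ok` (left at the system default). A rough sketch of that filtering, rendered as the equivalent `sysctl -w` commands — the skip-on-sentinel behavior is an assumption inferred from the log, not taken from the role's source:

```python
def sysctl_commands(settings):
    """Render `sysctl -w` commands for a list of {'name', 'value'} entries.

    Entries whose value is the sentinel 'KOLLA_UNSET' are skipped
    (assumed behavior: they show up as 'ok', i.e. unchanged, in the log).
    """
    return [
        f"sysctl -w {s['name']}={s['value']}"
        for s in settings
        if s["value"] != "KOLLA_UNSET"
    ]


# The per-node item list from the task output above:
settings = [
    {"name": "net.ipv6.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.tcp_retries2", "value": "KOLLA_UNSET"},
    {"name": "net.unix.max_dgram_qlen", "value": 128},
]
```

The `ip_nonlocal_bind` settings let keepalived's VIP be bound by HAProxy even on nodes that do not currently hold the address, which is why they precede the load-balancer deployment.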
2025-06-03 15:38:28.274601 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-03 15:38:28.274614 | orchestrator | Tuesday 03 June 2025 15:32:28 +0000 (0:00:02.220) 0:00:12.150 ********** 2025-06-03 15:38:28.274628 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-06-03 15:38:28.274662 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.274696 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-06-03 15:38:28.274711 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.274724 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-06-03 15:38:28.274737 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.274748 | orchestrator | 2025-06-03 15:38:28.274756 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-06-03 15:38:28.274764 | orchestrator | Tuesday 03 June 2025 15:32:29 +0000 (0:00:01.447) 0:00:13.597 ********** 2025-06-03 15:38:28.274775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:28.274791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:28.274805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:28.274832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:28.274854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:28.274878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:28.274889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:28.274898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:28.274906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:28.274915 | orchestrator | 2025-06-03 15:38:28.274923 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-06-03 15:38:28.274931 | orchestrator | Tuesday 03 June 2025 15:32:31 +0000 (0:00:02.049) 0:00:15.646 ********** 2025-06-03 15:38:28.274945 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.274954 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.274961 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.274969 | orchestrator | 2025-06-03 15:38:28.274978 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-06-03 15:38:28.274992 | orchestrator | Tuesday 03 June 2025 15:32:33 +0000 (0:00:01.460) 0:00:17.107 ********** 2025-06-03 15:38:28.275066 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-06-03 15:38:28.275077 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-06-03 15:38:28.275093 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-06-03 15:38:28.275107 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-06-03 15:38:28.275120 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-06-03 15:38:28.275133 | orchestrator | changed: [testbed-node-2] => 
(item=rules) 2025-06-03 15:38:28.275149 | orchestrator | 2025-06-03 15:38:28.275163 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-06-03 15:38:28.275177 | orchestrator | Tuesday 03 June 2025 15:32:35 +0000 (0:00:02.149) 0:00:19.256 ********** 2025-06-03 15:38:28.275191 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.275206 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.275220 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.275235 | orchestrator | 2025-06-03 15:38:28.275250 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-06-03 15:38:28.275264 | orchestrator | Tuesday 03 June 2025 15:32:38 +0000 (0:00:02.540) 0:00:21.797 ********** 2025-06-03 15:38:28.275279 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:28.275294 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:28.275307 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:28.275321 | orchestrator | 2025-06-03 15:38:28.275329 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-06-03 15:38:28.275337 | orchestrator | Tuesday 03 June 2025 15:32:40 +0000 (0:00:02.190) 0:00:23.988 ********** 2025-06-03 15:38:28.275351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.275371 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.275380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.275390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c0e7e5dd4e22955c0b7a8644194a752ab1b7f21c', '__omit_place_holder__c0e7e5dd4e22955c0b7a8644194a752ab1b7f21c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-03 15:38:28.275435 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.275445 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.275453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.275462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.275475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 
'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c0e7e5dd4e22955c0b7a8644194a752ab1b7f21c', '__omit_place_holder__c0e7e5dd4e22955c0b7a8644194a752ab1b7f21c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-03 15:38:28.275483 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.275498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.275506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 
15:38:28.275520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.275528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c0e7e5dd4e22955c0b7a8644194a752ab1b7f21c', '__omit_place_holder__c0e7e5dd4e22955c0b7a8644194a752ab1b7f21c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-03 15:38:28.275537 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.275569 | orchestrator | 2025-06-03 15:38:28.275578 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-06-03 15:38:28.275586 | orchestrator | Tuesday 03 June 2025 15:32:41 +0000 (0:00:01.163) 0:00:25.151 ********** 2025-06-03 15:38:28.275594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:28.275607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:28.275623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:28.275695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:28.275705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.275714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c0e7e5dd4e22955c0b7a8644194a752ab1b7f21c', '__omit_place_holder__c0e7e5dd4e22955c0b7a8644194a752ab1b7f21c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-03 15:38:28.275722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:28.275730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.275742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:28.275756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  
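The pattern in the tasks above is a dict of service definitions iterated per host, with items skipped when `enabled` is false (haproxy-ssh) or, for the check-copying task, when no healthcheck is defined (keepalived). A minimal sketch of the enabled-flag filtering, using a trimmed-down version of the service dicts from the log:

```python
def enabled_services(services):
    """Return names of services whose 'enabled' flag is truthy,
    mirroring how the loadbalancer role skips disabled entries
    (haproxy-ssh, enabled=False, is skipped throughout the log)."""
    return [name for name, svc in services.items() if svc.get("enabled")]


# Trimmed from the full definitions shown in the task output:
services = {
    "haproxy": {"enabled": True},
    "proxysql": {"enabled": True},
    "keepalived": {"enabled": True},
    "haproxy-ssh": {"enabled": False},
}
```

Note that the real task items carry the full container definition (image, volumes, healthcheck), and the check-copying task additionally skips keepalived because its definition has no `healthcheck` key.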
2025-06-03 15:38:28.275771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c0e7e5dd4e22955c0b7a8644194a752ab1b7f21c', '__omit_place_holder__c0e7e5dd4e22955c0b7a8644194a752ab1b7f21c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-03 15:38:28.275779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c0e7e5dd4e22955c0b7a8644194a752ab1b7f21c', '__omit_place_holder__c0e7e5dd4e22955c0b7a8644194a752ab1b7f21c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-03 15:38:28.275788 | orchestrator | 2025-06-03 15:38:28.275796 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-06-03 15:38:28.275803 | orchestrator | Tuesday 03 June 2025 15:32:45 +0000 (0:00:04.119) 0:00:29.270 ********** 2025-06-03 15:38:28.275812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:28.275820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:28.275836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:28.275850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:28.275864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:28.275905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:28.275921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:28.275935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:28.275969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:28.275979 | orchestrator | 2025-06-03 15:38:28.275987 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-06-03 15:38:28.275995 | orchestrator | Tuesday 03 June 2025 15:32:49 +0000 (0:00:03.607) 0:00:32.878 ********** 2025-06-03 15:38:28.276004 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-03 15:38:28.276012 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-03 15:38:28.276021 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-03 15:38:28.276059 | orchestrator | 2025-06-03 15:38:28.276072 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-06-03 15:38:28.276084 | orchestrator | Tuesday 03 June 2025 15:32:50 +0000 (0:00:01.707) 0:00:34.586 ********** 2025-06-03 15:38:28.276097 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-03 15:38:28.276110 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-03 15:38:28.276132 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)[2025-06-03 15:38:28 | INFO  | Task 2b42b3aa-4396-4a89-a666-075c05fdd1c6 is in state SUCCESS 2025-06-03 15:38:28.276147 | orchestrator | 2025-06-03 15:38:28.276162 | orchestrator | 2025-06-03 15:38:28.276171 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-06-03 15:38:28.276179 | orchestrator | Tuesday 03 June 2025 15:32:55 +0000 (0:00:04.228) 0:00:38.815 ********** 2025-06-03 15:38:28.276187 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.276195 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.276203 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.276211 | orchestrator | 2025-06-03 15:38:28.276219 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-06-03 15:38:28.276226 | orchestrator | Tuesday 03 June 2025 15:32:56 +0000 (0:00:01.421) 0:00:40.237 ********** 2025-06-03 15:38:28.276234 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-03 15:38:28.276243 | orchestrator | changed: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-03 15:38:28.276251 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-03 15:38:28.276259 | orchestrator | 2025-06-03 15:38:28.276267 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-06-03 15:38:28.276326 | orchestrator | Tuesday 03 June 2025 15:32:58 +0000 (0:00:02.163) 0:00:42.400 ********** 2025-06-03 15:38:28.276335 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-03 15:38:28.276343 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-03 15:38:28.276352 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-03 15:38:28.276360 | orchestrator | 2025-06-03 15:38:28.276367 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-06-03 15:38:28.276375 | orchestrator | Tuesday 03 June 2025 15:33:00 +0000 (0:00:01.747) 0:00:44.147 ********** 2025-06-03 15:38:28.276383 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-06-03 15:38:28.276391 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-06-03 15:38:28.276399 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-06-03 15:38:28.276407 | orchestrator | 2025-06-03 15:38:28.276441 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-06-03 15:38:28.276476 | orchestrator | Tuesday 03 June 2025 15:33:01 +0000 (0:00:01.281) 0:00:45.428 ********** 2025-06-03 15:38:28.276485 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-06-03 15:38:28.276493 | orchestrator | changed: 
[testbed-node-1] => (item=haproxy-internal.pem) 2025-06-03 15:38:28.276500 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-06-03 15:38:28.276508 | orchestrator | 2025-06-03 15:38:28.276516 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-03 15:38:28.276524 | orchestrator | Tuesday 03 June 2025 15:33:03 +0000 (0:00:01.917) 0:00:47.346 ********** 2025-06-03 15:38:28.276532 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:28.276546 | orchestrator | 2025-06-03 15:38:28.276555 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-06-03 15:38:28.276563 | orchestrator | Tuesday 03 June 2025 15:33:04 +0000 (0:00:00.624) 0:00:47.971 ********** 2025-06-03 15:38:28.276571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:28.276585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:28.276602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:28.276610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:28.276619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:28.276627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:28.276664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:28.276673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2025-06-03 15:38:28.276686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:28.276694 | orchestrator | 2025-06-03 15:38:28.276702 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-06-03 15:38:28.276710 | orchestrator | Tuesday 03 June 2025 15:33:08 +0000 (0:00:03.812) 0:00:51.783 ********** 2025-06-03 15:38:28.276725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.276733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.276742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.276750 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.276758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.276771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.276809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.276820 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.276833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.276842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.276850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.276858 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.276866 | orchestrator | 2025-06-03 15:38:28.276874 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-06-03 15:38:28.276882 | orchestrator | Tuesday 03 June 2025 15:33:08 +0000 (0:00:00.863) 0:00:52.647 ********** 2025-06-03 15:38:28.276890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.276952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.276968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.276982 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.277002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.277027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.277041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.277053 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.277062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.277078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.277086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.277095 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.277102 | orchestrator | 2025-06-03 15:38:28.277110 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-03 15:38:28.277118 | orchestrator | Tuesday 03 June 2025 15:33:10 +0000 (0:00:01.035) 0:00:53.682 ********** 2025-06-03 15:38:28.277130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.277145 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.277154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.277162 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.277170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.277184 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.277192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.277201 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.277236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.277253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.277267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.277295 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.277304 | orchestrator | 2025-06-03 15:38:28.277312 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-03 15:38:28.277339 | orchestrator | Tuesday 03 June 2025 15:33:10 +0000 (0:00:00.659) 0:00:54.342 ********** 2025-06-03 15:38:28.277347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.277362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.277371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.277379 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.277387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.277400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.277409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.277417 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.277431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.277451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.277465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.277479 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.277492 | orchestrator | 2025-06-03 15:38:28.277505 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-03 15:38:28.277518 | orchestrator | Tuesday 03 June 2025 15:33:11 +0000 (0:00:00.620) 0:00:54.963 ********** 2025-06-03 15:38:28.277532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.277547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.277567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.277581 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.278412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.278487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.278497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.278506 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.278514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.278523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.278531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.278539 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.278547 | orchestrator | 2025-06-03 15:38:28.278560 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-06-03 15:38:28.278568 | orchestrator | Tuesday 03 June 2025 15:33:12 +0000 (0:00:01.099) 0:00:56.062 ********** 2025-06-03 15:38:28.278577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.278621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.278632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.278710 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.278720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.278728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.278737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.278745 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.278758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.278778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.278787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.278795 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.278803 | orchestrator | 2025-06-03 15:38:28.278811 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-06-03 15:38:28.278819 | orchestrator | Tuesday 03 June 2025 15:33:13 +0000 (0:00:00.610) 0:00:56.673 ********** 2025-06-03 
15:38:28.278827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.278836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.278844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.278852 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.278860 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.278878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.278895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.278909 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.278921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.278966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.278981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.278994 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.279008 | orchestrator | 2025-06-03 15:38:28.279021 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend 
internal TLS key] **** 2025-06-03 15:38:28.279034 | orchestrator | Tuesday 03 June 2025 15:33:13 +0000 (0:00:00.610) 0:00:57.284 ********** 2025-06-03 15:38:28.279048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.279141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.279160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-06-03 15:38:28.279170 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.279187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.279197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.279206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.279214 | 
orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.279221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-03 15:38:28.279229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-03 15:38:28.279249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-03 15:38:28.279257 | orchestrator | skipping: [testbed-node-2] 
2025-06-03 15:38:28.279265 | orchestrator | 2025-06-03 15:38:28.279272 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-06-03 15:38:28.279280 | orchestrator | Tuesday 03 June 2025 15:33:14 +0000 (0:00:01.035) 0:00:58.319 ********** 2025-06-03 15:38:28.279288 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-03 15:38:28.279296 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-03 15:38:28.279308 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-03 15:38:28.279315 | orchestrator | 2025-06-03 15:38:28.279322 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-06-03 15:38:28.279328 | orchestrator | Tuesday 03 June 2025 15:33:16 +0000 (0:00:02.013) 0:01:00.332 ********** 2025-06-03 15:38:28.279335 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-03 15:38:28.279342 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-03 15:38:28.279348 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-03 15:38:28.279355 | orchestrator | 2025-06-03 15:38:28.279362 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-06-03 15:38:28.279368 | orchestrator | Tuesday 03 June 2025 15:33:19 +0000 (0:00:02.342) 0:01:02.675 ********** 2025-06-03 15:38:28.279375 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-03 15:38:28.279382 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 
'sshd_config'})  2025-06-03 15:38:28.279388 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-03 15:38:28.279395 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-03 15:38:28.279401 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.279408 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-03 15:38:28.279414 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.279421 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-03 15:38:28.279428 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.279434 | orchestrator | 2025-06-03 15:38:28.279441 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-06-03 15:38:28.279447 | orchestrator | Tuesday 03 June 2025 15:33:20 +0000 (0:00:01.292) 0:01:03.968 ********** 2025-06-03 15:38:28.279454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:28.279512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:28.279530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-03 15:38:28.279548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:28.279608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:28.279616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-03 15:38:28.279623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:28.279684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:28.279695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-03 15:38:28.279702 | orchestrator | 2025-06-03 15:38:28.279709 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-06-03 15:38:28.279716 | orchestrator | Tuesday 03 June 2025 15:33:23 +0000 (0:00:02.923) 0:01:06.892 ********** 2025-06-03 15:38:28.279722 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:28.279729 | orchestrator | 2025-06-03 15:38:28.279736 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-06-03 15:38:28.279747 | orchestrator | Tuesday 03 June 2025 15:33:24 +0000 (0:00:00.943) 0:01:07.835 ********** 2025-06-03 15:38:28.279760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': 
'30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-03 15:38:28.279768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-03 15:38:28.279775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-03 15:38:28.279787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-03 15:38:28.279795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.279802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.279813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.279824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.279832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-03 15:38:28.279843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-03 15:38:28.279850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.279857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.279864 | orchestrator | 2025-06-03 15:38:28.279871 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-06-03 15:38:28.279878 | orchestrator | Tuesday 03 June 2025 15:33:28 +0000 (0:00:04.109) 0:01:11.945 ********** 2025-06-03 15:38:28.279888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-03 15:38:28.279900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-03 15:38:28.279908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-03 
15:38:28.279919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.279926 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.279933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-03 15:38:28.279940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-03 15:38:28.279950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.279957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.279964 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.279976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-03 15:38:28.279988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-03 15:38:28.279994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.280001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.280008 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.280015 | orchestrator | 2025-06-03 15:38:28.280021 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-06-03 15:38:28.280028 | orchestrator | Tuesday 03 June 2025 15:33:29 +0000 (0:00:00.714) 0:01:12.660 ********** 2025-06-03 15:38:28.280035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-03 15:38:28.280042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-03 15:38:28.280049 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.280056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-03 15:38:28.280063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-03 15:38:28.280070 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.280077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-03 15:38:28.280083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-03 15:38:28.280089 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.280096 | orchestrator | 2025-06-03 15:38:28.280105 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-06-03 15:38:28.280134 | orchestrator | Tuesday 03 June 2025 15:33:30 +0000 (0:00:01.097) 0:01:13.758 ********** 2025-06-03 15:38:28.280141 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.280147 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.280153 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.280159 | orchestrator | 2025-06-03 15:38:28.280165 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-06-03 15:38:28.280171 | orchestrator | Tuesday 03 June 2025 15:33:31 +0000 (0:00:01.571) 0:01:15.330 ********** 2025-06-03 15:38:28.280206 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.280213 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.280219 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.280225 | orchestrator | 2025-06-03 15:38:28.280231 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-06-03 15:38:28.280237 | orchestrator | Tuesday 03 June 2025 15:33:33 +0000 (0:00:01.953) 0:01:17.283 ********** 2025-06-03 15:38:28.280243 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:28.280249 | orchestrator | 2025-06-03 15:38:28.280255 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-06-03 15:38:28.280262 | orchestrator | Tuesday 03 June 2025 15:33:34 +0000 (0:00:00.603) 0:01:17.886 ********** 2025-06-03 15:38:28.280269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.280276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.280290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.280297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.280313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.280320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.280327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.280333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2025-06-03 15:38:28.280343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.280354 | orchestrator | 2025-06-03 15:38:28.280361 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-06-03 15:38:28.280367 | orchestrator | Tuesday 03 June 2025 15:33:37 +0000 (0:00:03.225) 0:01:21.111 ********** 2025-06-03 15:38:28.280378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-03 15:38:28.280384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.280391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.280398 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.280404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-03 15:38:28.280411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.280425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.280432 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.280443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-03 15:38:28.280450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.280456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.280463 | orchestrator | skipping: [testbed-node-2] 2025-06-03 
15:38:28.280469 | orchestrator | 2025-06-03 15:38:28.280475 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-06-03 15:38:28.280481 | orchestrator | Tuesday 03 June 2025 15:33:38 +0000 (0:00:00.834) 0:01:21.946 ********** 2025-06-03 15:38:28.280488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-03 15:38:28.280495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-03 15:38:28.280501 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.280508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-03 15:38:28.280518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-03 15:38:28.280525 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.280535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-03 15:38:28.280541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-03 15:38:28.280548 | orchestrator | 
skipping: [testbed-node-2] 2025-06-03 15:38:28.280554 | orchestrator | 2025-06-03 15:38:28.280560 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-06-03 15:38:28.280566 | orchestrator | Tuesday 03 June 2025 15:33:38 +0000 (0:00:00.701) 0:01:22.647 ********** 2025-06-03 15:38:28.280572 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.280578 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.280585 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.280591 | orchestrator | 2025-06-03 15:38:28.280597 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-06-03 15:38:28.280603 | orchestrator | Tuesday 03 June 2025 15:33:40 +0000 (0:00:01.342) 0:01:23.990 ********** 2025-06-03 15:38:28.280609 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.280615 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.280621 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.280628 | orchestrator | 2025-06-03 15:38:28.280660 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-06-03 15:38:28.280667 | orchestrator | Tuesday 03 June 2025 15:33:42 +0000 (0:00:02.580) 0:01:26.571 ********** 2025-06-03 15:38:28.280674 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.280680 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.280686 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.280692 | orchestrator | 2025-06-03 15:38:28.280698 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-06-03 15:38:28.280705 | orchestrator | Tuesday 03 June 2025 15:33:43 +0000 (0:00:00.627) 0:01:27.199 ********** 2025-06-03 15:38:28.280711 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:28.280717 | orchestrator | 2025-06-03 15:38:28.280723 | 
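The per-service loop items echoed above all share one shape: a container definition (image, volumes, healthcheck) plus an optional `haproxy` mapping, and the `haproxy-config` tasks act only on the entries that carry that mapping (hence `changed` for `barbican-api` but `skipping` for `barbican-worker` and `barbican-keystone-listener`). A minimal Python sketch of that selection logic — illustrative only, not kolla-ansible's actual template code; the sample dict is abridged from the log:

```python
# Abridged sample mirroring the barbican loop items in the log above.
# Only services that are enabled AND define a 'haproxy' mapping get
# frontend configuration; the rest are skipped by the role.
services = {
    "barbican-api": {
        "enabled": True,
        "haproxy": {
            "barbican_api": {"enabled": "yes", "mode": "http",
                             "external": False, "port": "9311"},
            "barbican_api_external": {"enabled": "yes", "mode": "http",
                                      "external": True, "port": "9311"},
        },
    },
    "barbican-worker": {"enabled": True},             # no 'haproxy' -> skipped
    "barbican-keystone-listener": {"enabled": True},  # no 'haproxy' -> skipped
}

def proxied_services(services):
    """Yield (name, haproxy_map) for enabled services that define one."""
    for name, svc in services.items():
        if svc.get("enabled") and "haproxy" in svc:
            yield name, svc["haproxy"]

print([name for name, _ in proxied_services(services)])
# -> ['barbican-api']
```

The helper `proxied_services` is hypothetical; it only names the filtering pattern the skipping/changed results in the log imply.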
orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-06-03 15:38:28.280794 | orchestrator | Tuesday 03 June 2025 15:33:44 +0000 (0:00:00.625) 0:01:27.824 ********** 2025-06-03 15:38:28.280802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-03 15:38:28.280809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-03 15:38:28.280821 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-03 15:38:28.280828 | orchestrator | 2025-06-03 15:38:28.280838 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-06-03 15:38:28.280845 | orchestrator | Tuesday 03 June 2025 15:33:46 +0000 (0:00:02.371) 0:01:30.196 ********** 2025-06-03 15:38:28.281033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-03 
15:38:28.281043 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.281049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-03 15:38:28.281056 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.281062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-03 15:38:28.281074 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.281081 | orchestrator | 2025-06-03 
15:38:28.281087 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-06-03 15:38:28.281093 | orchestrator | Tuesday 03 June 2025 15:33:48 +0000 (0:00:01.508) 0:01:31.705 ********** 2025-06-03 15:38:28.281100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-03 15:38:28.281108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-03 15:38:28.281123 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.281133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-03 15:38:28.281140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-03 15:38:28.281146 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.281156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-03 15:38:28.281163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-03 15:38:28.281170 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.281176 | orchestrator | 2025-06-03 15:38:28.281182 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-06-03 15:38:28.281188 | orchestrator | Tuesday 03 June 2025 15:33:49 +0000 (0:00:01.793) 0:01:33.498 ********** 2025-06-03 15:38:28.281194 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.281201 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.281207 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.281213 | orchestrator | 2025-06-03 15:38:28.281219 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-06-03 15:38:28.281230 | orchestrator | Tuesday 03 June 2025 15:33:50 +0000 (0:00:00.415) 0:01:33.914 ********** 2025-06-03 15:38:28.281237 | 
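Unlike the API services above, the ceph-rgw entry carries a `custom_member_list`: the RadosGW daemons run on testbed-node-3 through testbed-node-5 rather than on the control nodes running the play, so the HAProxy backend members are spelled out explicitly as `server ...` lines. A small sketch of how such a list could be generated — the helper is hypothetical, not part of kolla-ansible; node names, addresses, and check parameters are taken verbatim from the log:

```python
# Build HAProxy backend member lines of the form seen in the
# ceph-rgw 'custom_member_list' above:
#   server <name> <addr>:<port> check inter 2000 rise 2 fall 5
def member_lines(nodes, port=8081, inter=2000, rise=2, fall=5):
    return [
        f"server {name} {addr}:{port} check inter {inter} rise {rise} fall {fall}"
        for name, addr in nodes
    ]

# The three RGW hosts from the log.
nodes = [
    ("testbed-node-3", "192.168.16.13"),
    ("testbed-node-4", "192.168.16.14"),
    ("testbed-node-5", "192.168.16.15"),
]

print(member_lines(nodes)[0])
# -> server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5
```

Note the frontend listens on port 6780 while the members answer on 8081; the explicit member list is what lets the two differ.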
orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.281243 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.281249 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.281255 | orchestrator | 2025-06-03 15:38:28.281261 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-06-03 15:38:28.281267 | orchestrator | Tuesday 03 June 2025 15:33:51 +0000 (0:00:01.314) 0:01:35.228 ********** 2025-06-03 15:38:28.281274 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:28.281280 | orchestrator | 2025-06-03 15:38:28.281286 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-06-03 15:38:28.281292 | orchestrator | Tuesday 03 June 2025 15:33:52 +0000 (0:00:00.953) 0:01:36.181 ********** 2025-06-03 15:38:28.281298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.281306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.281316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.281327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.281338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.281378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.281386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.281399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.281409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.281421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.281427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.281434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.281440 | orchestrator |
2025-06-03 15:38:28.281447 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-06-03 15:38:28.281453 | orchestrator | Tuesday 03 June 2025 15:33:56 +0000 (0:00:03.635) 0:01:39.816 **********
2025-06-03 15:38:28.281463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-03 15:38:28.281470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.281481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-03 15:38:28.281493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.281499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.281506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.281512 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.281522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.281533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.281544 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.281550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-03 15:38:28.281557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.281563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.281574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.281580 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.281587 | orchestrator |
2025-06-03 15:38:28.281593 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-06-03 15:38:28.281599 | orchestrator | Tuesday 03 June 2025 15:33:57 +0000 (0:00:00.929) 0:01:40.746 **********
2025-06-03 15:38:28.281611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-06-03 15:38:28.281621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-06-03 15:38:28.281628 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.281634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-06-03 15:38:28.281665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-06-03 15:38:28.281676 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.281687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-06-03 15:38:28.281699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-06-03 15:38:28.281710 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.281720 | orchestrator |
2025-06-03 15:38:28.281728 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-06-03 15:38:28.281736 | orchestrator | Tuesday 03 June 2025 15:33:58 +0000 (0:00:01.006) 0:01:41.753 **********
2025-06-03 15:38:28.281743 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:38:28.281750 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:38:28.281757 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:38:28.281764 | orchestrator |
2025-06-03 15:38:28.281771 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-06-03 15:38:28.281778 | orchestrator | Tuesday 03 June 2025 15:33:59 +0000 (0:00:01.363) 0:01:43.116 **********
2025-06-03 15:38:28.281787 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:38:28.281797 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:38:28.281807 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:38:28.281817 | orchestrator |
2025-06-03 15:38:28.281827 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-06-03 15:38:28.281837 | orchestrator | Tuesday 03 June 2025 15:34:01 +0000 (0:00:02.177) 0:01:45.293 **********
2025-06-03 15:38:28.281846 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.281856 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.281865 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.281875 | orchestrator |
2025-06-03 15:38:28.281885 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-06-03 15:38:28.281895 | orchestrator | Tuesday 03 June 2025 15:34:02 +0000 (0:00:00.666) 0:01:45.960 **********
2025-06-03 15:38:28.281907 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.281914 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.281921 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.281928 | orchestrator |
2025-06-03 15:38:28.281935 | orchestrator | TASK [include_role : designate] ************************************************
2025-06-03 15:38:28.281946 | orchestrator | Tuesday 03 June 2025 15:34:02 +0000 (0:00:00.610) 0:01:46.570 **********
2025-06-03 15:38:28.281999 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:38:28.282010 | orchestrator |
2025-06-03 15:38:28.282057 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-06-03 15:38:28.282069 | orchestrator | Tuesday 03 June 2025 15:34:03 +0000 (0:00:00.818) 0:01:47.389 **********
2025-06-03 15:38:28.282096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:38:28.282119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-03 15:38:28.282131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:38:28.282219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-03 15:38:28.282231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:38:28.282242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-03 15:38:28.282271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282401 | orchestrator |
2025-06-03 15:38:28.282412 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-06-03 15:38:28.282422 | orchestrator | Tuesday 03 June 2025 15:34:09 +0000 (0:00:06.083) 0:01:53.473 **********
2025-06-03 15:38:28.282438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:38:28.282449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-03 15:38:28.282460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:38:28.282572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282584 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.282595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-03 15:38:28.282614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:38:28.282687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-03 15:38:28.282713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.282751 |
orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.282761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.282778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.282788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.282799 | orchestrator | skipping: [testbed-node-2] 2025-06-03 
15:38:28.282809 | orchestrator | 2025-06-03 15:38:28.282819 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-06-03 15:38:28.282837 | orchestrator | Tuesday 03 June 2025 15:34:10 +0000 (0:00:01.144) 0:01:54.617 ********** 2025-06-03 15:38:28.282848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-03 15:38:28.282859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-03 15:38:28.282866 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.282873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-03 15:38:28.282879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-03 15:38:28.282885 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.282892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-03 15:38:28.282898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-03 15:38:28.282904 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.282911 | orchestrator | 2025-06-03 15:38:28.282917 | orchestrator | TASK 
[proxysql-config : Copying over designate ProxySQL users config] ********** 2025-06-03 15:38:28.282923 | orchestrator | Tuesday 03 June 2025 15:34:12 +0000 (0:00:01.173) 0:01:55.791 ********** 2025-06-03 15:38:28.282929 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.282935 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.282941 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.282947 | orchestrator | 2025-06-03 15:38:28.282953 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-06-03 15:38:28.282959 | orchestrator | Tuesday 03 June 2025 15:34:13 +0000 (0:00:01.810) 0:01:57.602 ********** 2025-06-03 15:38:28.282966 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.282972 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.282978 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.282984 | orchestrator | 2025-06-03 15:38:28.282990 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-06-03 15:38:28.283002 | orchestrator | Tuesday 03 June 2025 15:34:16 +0000 (0:00:02.134) 0:01:59.736 ********** 2025-06-03 15:38:28.283009 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.283015 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.283021 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.283027 | orchestrator | 2025-06-03 15:38:28.283033 | orchestrator | TASK [include_role : glance] *************************************************** 2025-06-03 15:38:28.283039 | orchestrator | Tuesday 03 June 2025 15:34:16 +0000 (0:00:00.318) 0:02:00.055 ********** 2025-06-03 15:38:28.283045 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:28.283051 | orchestrator | 2025-06-03 15:38:28.283058 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-06-03 15:38:28.283064 | 
orchestrator | Tuesday 03 June 2025 15:34:17 +0000 (0:00:00.980) 0:02:01.036 ********** 2025-06-03 15:38:28.283110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:38:28.283131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-03 15:38:28.283144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:38:28.283156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-03 15:38:28.283183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:38:28.283197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-03 15:38:28.283204 | orchestrator | 2025-06-03 15:38:28.283210 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-06-03 15:38:28.283216 | orchestrator | Tuesday 03 June 2025 15:34:22 +0000 (0:00:05.289) 0:02:06.325 ********** 2025-06-03 15:38:28.283231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-03 15:38:28.283245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-03 15:38:28.283252 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.283262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-03 15:38:28.283275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-03 15:38:28.283286 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.283296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-03 15:38:28.283320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-03 15:38:28.283331 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.283337 | orchestrator | 2025-06-03 15:38:28.283344 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-06-03 15:38:28.283350 | orchestrator | Tuesday 03 June 2025 15:34:26 +0000 (0:00:04.159) 0:02:10.485 ********** 2025-06-03 15:38:28.283356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-03 15:38:28.283363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}})  2025-06-03 15:38:28.283370 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.283376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-03 15:38:28.283387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-03 15:38:28.283393 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.283404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-03 15:38:28.283416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-03 15:38:28.283423 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.283429 | orchestrator | 2025-06-03 15:38:28.283436 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-06-03 15:38:28.283442 | orchestrator | Tuesday 03 June 2025 15:34:30 +0000 (0:00:03.194) 0:02:13.679 ********** 2025-06-03 15:38:28.283448 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.283454 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.283460 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.283467 | orchestrator | 2025-06-03 15:38:28.283473 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-06-03 15:38:28.283479 | orchestrator | Tuesday 03 June 2025 15:34:31 +0000 (0:00:01.629) 0:02:15.308 ********** 2025-06-03 15:38:28.283485 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.283491 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.283497 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.283503 | orchestrator | 2025-06-03 15:38:28.283509 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-06-03 15:38:28.283515 | orchestrator | Tuesday 03 June 2025 15:34:33 +0000 (0:00:01.886) 0:02:17.195 ********** 2025-06-03 15:38:28.283522 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.283528 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.283534 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.283540 | orchestrator | 
2025-06-03 15:38:28.283546 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-06-03 15:38:28.283553 | orchestrator | Tuesday 03 June 2025 15:34:33 +0000 (0:00:00.307) 0:02:17.503 ********** 2025-06-03 15:38:28.283559 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:28.283565 | orchestrator | 2025-06-03 15:38:28.283571 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-06-03 15:38:28.283577 | orchestrator | Tuesday 03 June 2025 15:34:34 +0000 (0:00:00.824) 0:02:18.327 ********** 2025-06-03 15:38:28.283584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:38:28.283591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:38:28.283605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:38:28.283612 | orchestrator | 2025-06-03 15:38:28.283618 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-06-03 15:38:28.283624 | orchestrator | Tuesday 03 June 2025 15:34:38 +0000 (0:00:03.450) 0:02:21.777 ********** 2025-06-03 15:38:28.283747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-03 15:38:28.283760 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.283767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-03 15:38:28.283773 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.283780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-03 15:38:28.283786 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.283792 | orchestrator | 2025-06-03 15:38:28.283799 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-06-03 15:38:28.283805 | orchestrator | Tuesday 03 June 2025 15:34:38 +0000 (0:00:00.480) 0:02:22.257 ********** 2025-06-03 15:38:28.283811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-03 15:38:28.283824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-06-03 15:38:28.283830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-06-03 15:38:28.283836 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.283843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-06-03 15:38:28.283849 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.283855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-06-03 15:38:28.283866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-06-03 15:38:28.283872 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.283879 | orchestrator |
2025-06-03 15:38:28.283885 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-06-03 15:38:28.283891 | orchestrator | Tuesday 03 June 2025 15:34:39 +0000 (0:00:00.710) 0:02:22.968 **********
2025-06-03 15:38:28.283897 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:38:28.283903 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:38:28.283909 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:38:28.283915 | orchestrator |
2025-06-03 15:38:28.283921 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-06-03 15:38:28.283928 | orchestrator | Tuesday 03 June 2025 15:34:40 +0000 (0:00:01.533) 0:02:24.501 **********
2025-06-03 15:38:28.283934 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:38:28.283940 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:38:28.283946 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:38:28.283952 | orchestrator |
2025-06-03 15:38:28.283958 | orchestrator | TASK [include_role : heat] *****************************************************
2025-06-03 15:38:28.283964 | orchestrator | Tuesday 03 June 2025 15:34:42 +0000 (0:00:02.063) 0:02:26.565 **********
2025-06-03 15:38:28.283971 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.283977 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.283989 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.283995 | orchestrator |
2025-06-03 15:38:28.284001 | orchestrator | TASK [include_role : horizon] **************************************************
2025-06-03 15:38:28.284007 | orchestrator | Tuesday 03 June 2025 15:34:43 +0000 (0:00:00.342) 0:02:26.907 **********
2025-06-03 15:38:28.284014 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:38:28.284020 | orchestrator |
2025-06-03 15:38:28.284026 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2025-06-03 15:38:28.284032 | orchestrator | Tuesday 03 June 2025 15:34:44 +0000 (0:00:00.867) 0:02:27.775 **********
2025-06-03 15:38:28.284039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA':
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:38:28.284059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:38:28.284067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 
'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:38:28.284077 | orchestrator | 2025-06-03 15:38:28.284082 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-06-03 15:38:28.284088 | orchestrator | Tuesday 03 June 2025 15:34:48 +0000 (0:00:03.919) 0:02:31.695 ********** 2025-06-03 15:38:28.284101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-03 15:38:28.284113 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.284122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-03 15:38:28.284129 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.284139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-03 15:38:28.284149 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.284155 | orchestrator | 2025-06-03 15:38:28.284161 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-06-03 15:38:28.284166 | orchestrator | Tuesday 03 June 2025 15:34:48 +0000 (0:00:00.719) 0:02:32.414 ********** 2025-06-03 15:38:28.284172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-03 15:38:28.284179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-03 15:38:28.284185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-03 15:38:28.284191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-03 15:38:28.284197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-03 15:38:28.284206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-03 15:38:28.284212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-03 15:38:28.284217 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.284226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-03 15:38:28.284233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-03 15:38:28.284238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-03 15:38:28.284248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-03 15:38:28.284253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-03 15:38:28.284287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-03 
15:38:28.284293 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.284299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-03 15:38:28.284305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-03 15:38:28.284310 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.284316 | orchestrator |
2025-06-03 15:38:28.284322 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-06-03 15:38:28.284327 | orchestrator | Tuesday 03 June 2025 15:34:49 +0000 (0:00:00.995) 0:02:33.410 **********
2025-06-03 15:38:28.284332 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:38:28.284338 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:38:28.284343 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:38:28.284349 | orchestrator |
2025-06-03 15:38:28.284354 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-06-03 15:38:28.284360 | orchestrator | Tuesday 03 June 2025 15:34:51 +0000 (0:00:01.617) 0:02:35.027 **********
2025-06-03 15:38:28.284365 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:38:28.284371 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:38:28.284376 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:38:28.284397 | orchestrator |
2025-06-03 15:38:28.284402 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-06-03 15:38:28.284409 | orchestrator | Tuesday 03 June 2025 15:34:53 +0000 (0:00:02.012) 0:02:37.040 **********
2025-06-03 15:38:28.284414 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.284419 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.284425 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.284430 | orchestrator |
2025-06-03 15:38:28.284435 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-06-03 15:38:28.284441 | orchestrator | Tuesday 03 June 2025 15:34:53 +0000 (0:00:00.316) 0:02:37.357 **********
2025-06-03 15:38:28.284446 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.284451 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.284457 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.284462 | orchestrator |
2025-06-03 15:38:28.284473 | orchestrator | TASK [include_role : keystone] *************************************************
2025-06-03 15:38:28.284481 | orchestrator | Tuesday 03 June 2025 15:34:54 +0000 (0:00:00.324) 0:02:37.681 **********
2025-06-03 15:38:28.284490 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:38:28.284498 | orchestrator |
2025-06-03 15:38:28.284506 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-06-03 15:38:28.284520 | orchestrator | Tuesday 03 June 2025 15:34:55 +0000 (0:00:01.172) 0:02:38.854 **********
2025-06-03 15:38:28.284548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout':
'30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:38:28.284558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:38:28.284567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:38:28.284576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:38:28.284590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:38:28.284605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:38:28.284620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:38:28.284629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:38:28.284653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:38:28.284662 | orchestrator | 2025-06-03 15:38:28.284670 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-06-03 15:38:28.284679 | orchestrator | Tuesday 03 June 2025 15:34:58 +0000 (0:00:03.319) 0:02:42.173 ********** 2025-06-03 15:38:28.284693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-03 15:38:28.284709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:38:28.284726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:38:28.284736 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.284747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-03 15:38:28.284757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:38:28.284766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:38:28.284775 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.284789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-03 15:38:28.284857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:38:28.284871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:38:28.284881 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.284891 | orchestrator | 2025-06-03 
15:38:28.284901 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-06-03 15:38:28.284911 | orchestrator | Tuesday 03 June 2025 15:34:59 +0000 (0:00:00.635) 0:02:42.809 ********** 2025-06-03 15:38:28.284921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-03 15:38:28.284932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-03 15:38:28.284942 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.284953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-03 15:38:28.284964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-03 15:38:28.284973 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.284979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-03 15:38:28.284992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-03 15:38:28.285000 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.285008 | orchestrator | 2025-06-03 15:38:28.285016 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-06-03 15:38:28.285025 | orchestrator | Tuesday 03 June 2025 15:35:00 +0000 (0:00:01.026) 0:02:43.835 ********** 2025-06-03 15:38:28.285033 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.285041 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.285050 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.285059 | orchestrator | 2025-06-03 15:38:28.285068 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-06-03 15:38:28.285076 | orchestrator | Tuesday 03 June 2025 15:35:01 +0000 (0:00:01.366) 0:02:45.201 ********** 2025-06-03 15:38:28.285091 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.285101 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.285109 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.285118 | orchestrator | 2025-06-03 15:38:28.285127 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-06-03 15:38:28.285133 | orchestrator | Tuesday 03 June 2025 15:35:03 +0000 (0:00:02.048) 0:02:47.250 ********** 2025-06-03 15:38:28.285140 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.285148 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.285156 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.285166 | orchestrator | 2025-06-03 15:38:28.285174 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-06-03 15:38:28.285183 | orchestrator | Tuesday 03 June 2025 15:35:03 +0000 
(0:00:00.325) 0:02:47.575 ********** 2025-06-03 15:38:28.285192 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:28.285201 | orchestrator | 2025-06-03 15:38:28.285210 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-06-03 15:38:28.285219 | orchestrator | Tuesday 03 June 2025 15:35:05 +0000 (0:00:01.214) 0:02:48.789 ********** 2025-06-03 15:38:28.285239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:38:28.285247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.285267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:38:28.285273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.285528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:38:28.285544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.285550 | orchestrator | 2025-06-03 15:38:28.285556 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-06-03 15:38:28.285561 | orchestrator | Tuesday 03 June 2025 15:35:08 +0000 (0:00:03.211) 0:02:52.001 ********** 2025-06-03 15:38:28.285567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:38:28.285579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.285585 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.285594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:38:28.285604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.285610 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.285615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:38:28.285621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.285631 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.285681 | orchestrator | 2025-06-03 15:38:28.285688 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-06-03 15:38:28.285694 | orchestrator | Tuesday 03 June 2025 15:35:08 +0000 (0:00:00.638) 0:02:52.639 ********** 2025-06-03 15:38:28.285700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-03 15:38:28.285706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-03 15:38:28.285712 | orchestrator | skipping: 
[testbed-node-0] 2025-06-03 15:38:28.285717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-03 15:38:28.285723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-03 15:38:28.285731 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.285740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-03 15:38:28.285754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-03 15:38:28.285764 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.285773 | orchestrator | 2025-06-03 15:38:28.285782 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-06-03 15:38:28.285792 | orchestrator | Tuesday 03 June 2025 15:35:10 +0000 (0:00:01.492) 0:02:54.131 ********** 2025-06-03 15:38:28.285802 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.285811 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.285821 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.285827 | orchestrator | 2025-06-03 15:38:28.285832 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-06-03 15:38:28.285837 | orchestrator | Tuesday 03 June 2025 15:35:11 +0000 (0:00:01.357) 0:02:55.489 ********** 2025-06-03 15:38:28.285843 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.285848 | orchestrator | changed: 
[testbed-node-1] 2025-06-03 15:38:28.285853 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.285859 | orchestrator | 2025-06-03 15:38:28.285864 | orchestrator | TASK [include_role : manila] *************************************************** 2025-06-03 15:38:28.285869 | orchestrator | Tuesday 03 June 2025 15:35:14 +0000 (0:00:02.207) 0:02:57.696 ********** 2025-06-03 15:38:28.285879 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:28.285884 | orchestrator | 2025-06-03 15:38:28.285888 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-06-03 15:38:28.285930 | orchestrator | Tuesday 03 June 2025 15:35:15 +0000 (0:00:01.064) 0:02:58.760 ********** 2025-06-03 15:38:28.285936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-03 15:38:28.285947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-03 15:38:28.285952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.285958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.285966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-03 15:38:28.285976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.285985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  
2025-06-03 15:38:28.285990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.285995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.286000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.286008 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.286038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.286049 | orchestrator | 2025-06-03 15:38:28.286057 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-06-03 15:38:28.286065 | orchestrator | Tuesday 03 June 2025 15:35:19 +0000 (0:00:03.968) 0:03:02.729 ********** 2025-06-03 15:38:28.286073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-03 15:38:28.286082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.286090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.286099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.286108 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.286120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-03 15:38:28.286136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.286143 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-03 15:38:28.286149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.286155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.286160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.286166 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.286174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.286188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.286194 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.286199 | orchestrator | 2025-06-03 15:38:28.286204 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-06-03 15:38:28.286210 | orchestrator | Tuesday 03 June 2025 15:35:19 +0000 (0:00:00.716) 0:03:03.445 ********** 2025-06-03 15:38:28.286216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-03 15:38:28.286221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-03 15:38:28.286227 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.286233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-03 15:38:28.286238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-03 15:38:28.286244 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.286249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-03 15:38:28.286255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-03 15:38:28.286260 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.286266 | orchestrator | 2025-06-03 15:38:28.286271 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-06-03 15:38:28.286277 | orchestrator | Tuesday 03 June 2025 15:35:20 +0000 (0:00:01.064) 0:03:04.510 ********** 2025-06-03 15:38:28.286282 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.286287 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.286293 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.286298 | orchestrator | 2025-06-03 15:38:28.286303 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-06-03 15:38:28.286308 | orchestrator | Tuesday 03 June 2025 15:35:22 +0000 (0:00:01.794) 0:03:06.304 ********** 2025-06-03 15:38:28.286313 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.286317 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.286322 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.286327 | orchestrator | 2025-06-03 15:38:28.286331 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-06-03 15:38:28.286336 | orchestrator | Tuesday 03 June 2025 15:35:24 +0000 (0:00:01.991) 0:03:08.296 ********** 2025-06-03 15:38:28.286341 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:28.286346 | orchestrator | 2025-06-03 15:38:28.286350 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-06-03 15:38:28.286355 | orchestrator | Tuesday 03 June 2025 15:35:25 +0000 (0:00:00.995) 0:03:09.292 ********** 2025-06-03 15:38:28.286367 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-03 15:38:28.286372 | orchestrator | 2025-06-03 15:38:28.286376 | orchestrator | TASK 
[haproxy-config : Copying over mariadb haproxy config] ******************** 2025-06-03 15:38:28.286381 | orchestrator | Tuesday 03 June 2025 15:35:29 +0000 (0:00:03.394) 0:03:12.687 ********** 2025-06-03 15:38:28.286390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:38:28.286397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-03 15:38:28.286402 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.286421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:38:28.286433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-03 15:38:28.286438 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.286446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 2025-06-03 15:38:28 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:38:28.286935 | orchestrator | 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:38:28.287045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-03 15:38:28.287078 | orchestrator | skipping: [testbed-node-2] 
2025-06-03 15:38:28.287100 | orchestrator |
2025-06-03 15:38:28.287145 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-06-03 15:38:28.287175 | orchestrator | Tuesday 03 June 2025 15:35:31 +0000 (0:00:02.291) 0:03:14.978 **********
2025-06-03 15:38:28.287211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-03 15:38:28.287257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-03 15:38:28.287280 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.287300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-03 15:38:28.287325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-03 15:38:28.287337 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.287365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-03 15:38:28.287379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-03 15:38:28.287390 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.287402 | orchestrator |
2025-06-03 15:38:28.287413 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-06-03 15:38:28.287424 | orchestrator | Tuesday 03 June 2025 15:35:33 +0000 (0:00:02.127) 0:03:17.106 **********
2025-06-03 15:38:28.287435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-03 15:38:28.287454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-03 15:38:28.287468 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.287486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-03 15:38:28.287499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-03 15:38:28.287512 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.287531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-03 15:38:28.287545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-03 15:38:28.287558 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.287570 | orchestrator |
2025-06-03 15:38:28.287583 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-06-03 15:38:28.287595 | orchestrator | Tuesday 03 June 2025 15:35:35 +0000 (0:00:02.240) 0:03:19.346 **********
2025-06-03 15:38:28.287614 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:38:28.287626 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:38:28.287675 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:38:28.287692 | orchestrator |
2025-06-03 15:38:28.287705 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-06-03 15:38:28.287718 | orchestrator | Tuesday 03 June 2025 15:35:37 +0000 (0:00:01.907) 0:03:21.253 **********
2025-06-03 15:38:28.287731 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.287744 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.287756 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.287768 | orchestrator |
2025-06-03 15:38:28.287782 | orchestrator | TASK [include_role : masakari] *************************************************
2025-06-03 15:38:28.287794 | orchestrator | Tuesday 03 June 2025 15:35:38 +0000 (0:00:01.245) 0:03:22.498 **********
2025-06-03 15:38:28.287807 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.287819 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.287832 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.287844 | orchestrator |
2025-06-03 15:38:28.287857 | orchestrator | TASK [include_role : memcached] ************************************************
2025-06-03 15:38:28.287868 | orchestrator | Tuesday 03 June 2025 15:35:39 +0000 (0:00:00.266) 0:03:22.765 **********
2025-06-03 15:38:28.287880 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:38:28.287891 | orchestrator |
2025-06-03 15:38:28.287902 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-06-03 15:38:28.287913 | orchestrator | Tuesday 03 June 2025 15:35:40 +0000 (0:00:01.007) 0:03:23.773 **********
2025-06-03 15:38:28.287926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-03 15:38:28.287946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-03 15:38:28.287967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-03 15:38:28.287980 | orchestrator |
2025-06-03 15:38:28.287999 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-06-03 15:38:28.288010 | orchestrator | Tuesday 03 June 2025 15:35:41 +0000 (0:00:01.800) 0:03:25.574 **********
2025-06-03 15:38:28.288022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-03 15:38:28.288034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-03 15:38:28.288046 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.288057 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.288069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-03 15:38:28.288080 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.288091 | orchestrator |
2025-06-03 15:38:28.288103 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-06-03 15:38:28.288119 | orchestrator | Tuesday 03 June 2025 15:35:42 +0000 (0:00:00.399) 0:03:25.974 **********
2025-06-03 15:38:28.288132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-03 15:38:28.288145 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.288157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-03 15:38:28.288168 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.288186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-03 15:38:28.288207 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.288218 | orchestrator |
2025-06-03 15:38:28.288229 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-06-03 15:38:28.288240 | orchestrator | Tuesday 03 June 2025 15:35:42 +0000 (0:00:00.581) 0:03:26.556 **********
2025-06-03 15:38:28.288251 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.288263 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.288274 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.288286 | orchestrator |
2025-06-03 15:38:28.288297 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-06-03 15:38:28.288308 | orchestrator | Tuesday 03 June 2025 15:35:43 +0000 (0:00:00.742) 0:03:27.298 **********
2025-06-03 15:38:28.288320 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.288331 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.288342 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.288353 | orchestrator |
2025-06-03 15:38:28.288363 | orchestrator | TASK [include_role : mistral] **************************************************
2025-06-03 15:38:28.288374 | orchestrator | Tuesday 03 June 2025 15:35:44 +0000 (0:00:01.287) 0:03:28.586 **********
2025-06-03 15:38:28.288385 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.288396 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.288408 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.288419 | orchestrator |
2025-06-03 15:38:28.288429 | orchestrator | TASK [include_role : neutron] **************************************************
2025-06-03 15:38:28.288441 | orchestrator | Tuesday 03 June 2025 15:35:45 +0000 (0:00:00.338) 0:03:28.924 **********
2025-06-03 15:38:28.288453 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:38:28.288464 | orchestrator |
2025-06-03 15:38:28.288475 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-06-03 15:38:28.288486 | orchestrator | Tuesday 03 June 2025 15:35:46 +0000 (0:00:01.470) 0:03:30.395 **********
2025-06-03 15:38:28.288498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:38:28.288512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.288529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.288558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.288571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-06-03 15:38:28.288583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.288596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-03 15:38:28.288609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-03 15:38:28.288627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.288688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:38:28.288703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.288715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-06-03 15:38:28.288727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:38:28.288739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-03 15:38:28.288757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.288776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.288795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-03 15:38:28.288808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:38:28.288819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.288831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.288856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.288876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-06-03 15:38:28.288895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-06-03 15:38:28.288916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-03 15:38:28.288935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-03 15:38:28.288955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:38:28.288992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.289023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.289044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 
'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:28.289063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.289083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-03 
15:38:28.289103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.289131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-03 15:38:28.289152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:28.289166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-03 15:38:28.289177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.289189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 
5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.289201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-03 15:38:28.289224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:28.289238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}}})  2025-06-03 15:38:28.289257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:28.289269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.289281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.289292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:28.289316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.289329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-03 15:38:28.289347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:28.289358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.289370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-03 15:38:28.289381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:28.289404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.289416 | orchestrator | 2025-06-03 15:38:28.289428 | orchestrator | TASK [haproxy-config : 
Add configuration for neutron when using single external frontend] *** 2025-06-03 15:38:28.289439 | orchestrator | Tuesday 03 June 2025 15:35:51 +0000 (0:00:04.715) 0:03:35.110 ********** 2025-06-03 15:38:28.289457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:38:28.289469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.289481 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.289503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.289519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-03 15:38:28.289531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.289549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:28.289561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:28.289572 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:38:28.289590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.289602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.289625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:28.290628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.290739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.290768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:38:28.290814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-03 15:38:28.290841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.290853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.290881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:28.290893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-03 15:38:28.290913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.290925 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.290943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-03 15:38:28.290963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.290975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.290987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:28.291083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-03 15:38:28.291098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:28.291115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.291128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:28.291147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.291160 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.291173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.291194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:28.291208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:28.291221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:28.291239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.291259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.291272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-03 15:38:28.291284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:28.291304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:28.291318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.291339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.291352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-03 15:38:28.291372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-03 15:38:28.291393 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-03 15:38:28.291407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:28.291420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.291474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.291532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-03 15:38:28.291546 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.291558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 
'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:38:28.291577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.291588 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.291600 | orchestrator | 2025-06-03 15:38:28.291612 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-06-03 15:38:28.291624 | orchestrator | Tuesday 03 June 2025 15:35:52 +0000 (0:00:01.356) 0:03:36.467 ********** 2025-06-03 15:38:28.291658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-03 15:38:28.291671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-03 15:38:28.291682 | orchestrator | skipping: 
[testbed-node-0] 2025-06-03 15:38:28.291694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-03 15:38:28.291705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-03 15:38:28.291716 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.291727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-03 15:38:28.291738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-03 15:38:28.291749 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.291760 | orchestrator | 2025-06-03 15:38:28.291771 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-06-03 15:38:28.291782 | orchestrator | Tuesday 03 June 2025 15:35:54 +0000 (0:00:01.814) 0:03:38.281 ********** 2025-06-03 15:38:28.291826 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.291840 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.291896 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.291909 | orchestrator | 2025-06-03 15:38:28.291921 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-06-03 15:38:28.291969 | orchestrator | Tuesday 03 June 2025 15:35:56 +0000 (0:00:01.439) 0:03:39.720 ********** 2025-06-03 15:38:28.291982 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.291993 | orchestrator | 
changed: [testbed-node-1] 2025-06-03 15:38:28.292004 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.292014 | orchestrator | 2025-06-03 15:38:28.292025 | orchestrator | TASK [include_role : placement] ************************************************ 2025-06-03 15:38:28.292045 | orchestrator | Tuesday 03 June 2025 15:35:58 +0000 (0:00:02.146) 0:03:41.867 ********** 2025-06-03 15:38:28.292056 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:28.292067 | orchestrator | 2025-06-03 15:38:28.292078 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-06-03 15:38:28.292089 | orchestrator | Tuesday 03 June 2025 15:35:59 +0000 (0:00:01.228) 0:03:43.096 ********** 2025-06-03 15:38:28.292109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.292123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.292134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.292146 | orchestrator | 2025-06-03 15:38:28.292158 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-06-03 15:38:28.292169 | orchestrator | Tuesday 03 June 2025 15:36:02 +0000 (0:00:03.425) 0:03:46.522 ********** 2025-06-03 15:38:28.292185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:38:28.292205 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.292563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:38:28.292584 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.292596 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:38:28.292608 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.292619 | orchestrator | 2025-06-03 15:38:28.292630 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-06-03 15:38:28.292673 | orchestrator | Tuesday 03 June 2025 15:36:03 +0000 (0:00:00.713) 0:03:47.236 ********** 2025-06-03 15:38:28.292685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-03 15:38:28.292697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-03 15:38:28.292729 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.292741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}})  2025-06-03 15:38:28.292753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-03 15:38:28.292765 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.292776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-03 15:38:28.292787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-03 15:38:28.292852 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.292867 | orchestrator | 2025-06-03 15:38:28.292878 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-06-03 15:38:28.292896 | orchestrator | Tuesday 03 June 2025 15:36:04 +0000 (0:00:00.787) 0:03:48.024 ********** 2025-06-03 15:38:28.292907 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.292919 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.292930 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.292940 | orchestrator | 2025-06-03 15:38:28.292952 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-06-03 15:38:28.292963 | orchestrator | Tuesday 03 June 2025 15:36:06 +0000 (0:00:01.837) 0:03:49.861 ********** 2025-06-03 15:38:28.292974 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.292985 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.292996 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.293007 | orchestrator | 
2025-06-03 15:38:28.293017 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-06-03 15:38:28.293028 | orchestrator | Tuesday 03 June 2025 15:36:08 +0000 (0:00:01.921) 0:03:51.782 ********** 2025-06-03 15:38:28.293039 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:28.293050 | orchestrator | 2025-06-03 15:38:28.293061 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-06-03 15:38:28.293072 | orchestrator | Tuesday 03 June 2025 15:36:09 +0000 (0:00:00.983) 0:03:52.766 ********** 2025-06-03 15:38:28.293096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.293110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.293123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.293144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.293163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.293178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.293382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.293415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.293440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.293453 | orchestrator | 2025-06-03 15:38:28.293467 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-06-03 15:38:28.293485 | orchestrator | Tuesday 03 June 2025 15:36:12 +0000 (0:00:03.782) 0:03:56.548 ********** 2025-06-03 15:38:28.293509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:38:28.293525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.293536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.293548 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.293561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:38:28.293584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.293596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.293694 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.293722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:38:28.293762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.293784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.293795 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.293806 | orchestrator | 2025-06-03 15:38:28.293818 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-06-03 15:38:28.293829 | orchestrator | Tuesday 03 June 2025 15:36:13 +0000 (0:00:00.847) 0:03:57.396 ********** 2025-06-03 15:38:28.293840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-03 15:38:28.293853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-03 15:38:28.293871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-03 15:38:28.293883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-03 15:38:28.293894 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.293905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}})  2025-06-03 15:38:28.293916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-03 15:38:28.293948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-03 15:38:28.293960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-03 15:38:28.293971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-03 15:38:28.293982 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.294101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-03 15:38:28.294115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-03 15:38:28.294125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-03 15:38:28.294144 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.294154 | 
orchestrator | 2025-06-03 15:38:28.294221 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-06-03 15:38:28.294233 | orchestrator | Tuesday 03 June 2025 15:36:14 +0000 (0:00:00.769) 0:03:58.165 ********** 2025-06-03 15:38:28.294249 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.294266 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.294283 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.294297 | orchestrator | 2025-06-03 15:38:28.294311 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-06-03 15:38:28.294326 | orchestrator | Tuesday 03 June 2025 15:36:16 +0000 (0:00:01.750) 0:03:59.916 ********** 2025-06-03 15:38:28.294341 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.294355 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.294369 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.294383 | orchestrator | 2025-06-03 15:38:28.294400 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-06-03 15:38:28.294419 | orchestrator | Tuesday 03 June 2025 15:36:18 +0000 (0:00:02.090) 0:04:02.006 ********** 2025-06-03 15:38:28.294431 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:28.294441 | orchestrator | 2025-06-03 15:38:28.294484 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-06-03 15:38:28.294494 | orchestrator | Tuesday 03 June 2025 15:36:19 +0000 (0:00:01.514) 0:04:03.521 ********** 2025-06-03 15:38:28.294504 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-06-03 15:38:28.294514 | orchestrator | 2025-06-03 15:38:28.294524 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy 
config] *** 2025-06-03 15:38:28.294533 | orchestrator | Tuesday 03 June 2025 15:36:21 +0000 (0:00:01.343) 0:04:04.865 ********** 2025-06-03 15:38:28.294545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-03 15:38:28.294563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-03 15:38:28.294574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-03 15:38:28.294585 | orchestrator | 2025-06-03 15:38:28.294602 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-06-03 15:38:28.294613 | orchestrator | 
Tuesday 03 June 2025 15:36:25 +0000 (0:00:03.871) 0:04:08.737 ********** 2025-06-03 15:38:28.294672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-03 15:38:28.294694 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.294705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-03 15:38:28.294715 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.294725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-03 15:38:28.294735 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.294745 | orchestrator | 2025-06-03 15:38:28.294755 | orchestrator | 
TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-06-03 15:38:28.294764 | orchestrator | Tuesday 03 June 2025 15:36:26 +0000 (0:00:01.409) 0:04:10.146 ********** 2025-06-03 15:38:28.294774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-03 15:38:28.294785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-03 15:38:28.294796 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.294805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-03 15:38:28.294816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-03 15:38:28.294826 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.294870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-03 15:38:28.294884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': 
['timeout tunnel 1h']}})  2025-06-03 15:38:28.294894 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.294904 | orchestrator | 2025-06-03 15:38:28.294914 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-03 15:38:28.294924 | orchestrator | Tuesday 03 June 2025 15:36:28 +0000 (0:00:01.955) 0:04:12.102 ********** 2025-06-03 15:38:28.294940 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.294950 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.294960 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.294969 | orchestrator | 2025-06-03 15:38:28.294979 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-03 15:38:28.294994 | orchestrator | Tuesday 03 June 2025 15:36:31 +0000 (0:00:02.680) 0:04:14.783 ********** 2025-06-03 15:38:28.295009 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.295025 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.295041 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.295057 | orchestrator | 2025-06-03 15:38:28.295083 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-06-03 15:38:28.295123 | orchestrator | Tuesday 03 June 2025 15:36:34 +0000 (0:00:02.885) 0:04:17.669 ********** 2025-06-03 15:38:28.295136 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-06-03 15:38:28.295147 | orchestrator | 2025-06-03 15:38:28.295157 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-06-03 15:38:28.295167 | orchestrator | Tuesday 03 June 2025 15:36:34 +0000 (0:00:00.728) 0:04:18.397 ********** 2025-06-03 15:38:28.295177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 
'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-03 15:38:28.295189 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.295209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-03 15:38:28.295220 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.295230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-03 15:38:28.295241 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.295251 | orchestrator | 2025-06-03 15:38:28.295260 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-06-03 
15:38:28.295270 | orchestrator | Tuesday 03 June 2025 15:36:35 +0000 (0:00:01.067) 0:04:19.464 ********** 2025-06-03 15:38:28.295311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-03 15:38:28.295331 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.295346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-03 15:38:28.295357 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.295367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-03 15:38:28.295377 | orchestrator | skipping: 
[testbed-node-2] 2025-06-03 15:38:28.295387 | orchestrator | 2025-06-03 15:38:28.295403 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-06-03 15:38:28.295413 | orchestrator | Tuesday 03 June 2025 15:36:37 +0000 (0:00:01.385) 0:04:20.850 ********** 2025-06-03 15:38:28.295423 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.295433 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.295443 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.295453 | orchestrator | 2025-06-03 15:38:28.295463 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-03 15:38:28.295477 | orchestrator | Tuesday 03 June 2025 15:36:38 +0000 (0:00:01.108) 0:04:21.959 ********** 2025-06-03 15:38:28.295494 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:28.295509 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:28.295524 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:28.295542 | orchestrator | 2025-06-03 15:38:28.295559 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-03 15:38:28.295611 | orchestrator | Tuesday 03 June 2025 15:36:40 +0000 (0:00:02.640) 0:04:24.599 ********** 2025-06-03 15:38:28.295625 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:28.295695 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:28.295711 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:28.295733 | orchestrator | 2025-06-03 15:38:28.295743 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-06-03 15:38:28.295753 | orchestrator | Tuesday 03 June 2025 15:36:43 +0000 (0:00:02.699) 0:04:27.298 ********** 2025-06-03 15:38:28.295763 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-06-03 15:38:28.295773 | orchestrator 
| 2025-06-03 15:38:28.295783 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-06-03 15:38:28.295793 | orchestrator | Tuesday 03 June 2025 15:36:44 +0000 (0:00:00.983) 0:04:28.282 ********** 2025-06-03 15:38:28.295843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-03 15:38:28.295854 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.295865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-03 15:38:28.295889 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.295899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-03 15:38:28.295909 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.295919 | orchestrator | 2025-06-03 15:38:28.295929 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-06-03 15:38:28.295945 | orchestrator | Tuesday 03 June 2025 15:36:45 +0000 (0:00:00.916) 0:04:29.199 ********** 2025-06-03 15:38:28.295955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-03 15:38:28.295966 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.295985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-03 15:38:28.295996 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.296006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-03 15:38:28.296017 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.296026 | orchestrator | 2025-06-03 15:38:28.296036 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-06-03 15:38:28.296046 | orchestrator | Tuesday 03 June 2025 15:36:46 +0000 (0:00:01.089) 0:04:30.288 ********** 2025-06-03 15:38:28.296083 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.296094 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.296104 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.296113 | orchestrator | 2025-06-03 15:38:28.296123 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-03 15:38:28.296133 | orchestrator | Tuesday 03 June 2025 15:36:48 +0000 (0:00:01.788) 0:04:32.077 ********** 2025-06-03 15:38:28.296143 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:28.296160 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:28.296170 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:28.296206 | orchestrator | 2025-06-03 15:38:28.296217 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-03 15:38:28.296246 | orchestrator | Tuesday 03 June 2025 15:36:50 +0000 (0:00:02.278) 0:04:34.355 ********** 2025-06-03 15:38:28.296254 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:38:28.296262 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:38:28.296270 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:38:28.296278 | orchestrator | 2025-06-03 15:38:28.296286 | orchestrator | TASK [include_role : octavia] 
************************************************** 2025-06-03 15:38:28.296294 | orchestrator | Tuesday 03 June 2025 15:36:53 +0000 (0:00:02.768) 0:04:37.124 ********** 2025-06-03 15:38:28.296302 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:28.296311 | orchestrator | 2025-06-03 15:38:28.296319 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-06-03 15:38:28.296326 | orchestrator | Tuesday 03 June 2025 15:36:54 +0000 (0:00:01.216) 0:04:38.341 ********** 2025-06-03 15:38:28.296335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.296349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:38:28.296358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:38:28.296373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:38:28.296382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 
15:38:28.296404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.296426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:38:28.296446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:38:28.296459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:38:28.296480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.296495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.296518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:38:28.296602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:38:28.296663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:38:28.296689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.296704 | orchestrator | 2025-06-03 15:38:28.296713 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-06-03 15:38:28.296721 | orchestrator | Tuesday 03 June 2025 15:36:58 +0000 (0:00:03.441) 0:04:41.782 ********** 2025-06-03 15:38:28.296738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-03 15:38:28.296754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:38:28.296763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:38:28.296771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:38:28.296779 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.296788 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.296801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-03 15:38:28.296815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:38:28.296828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:38:28.296837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:38:28.296845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.296854 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.296862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-03 15:38:28.296875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-03 15:38:28.296887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-03 15:38:28.296901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-03 15:38:28.296909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:38:28.296917 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.296925 | orchestrator | 2025-06-03 15:38:28.296934 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-06-03 15:38:28.296942 | orchestrator | Tuesday 03 June 2025 15:36:58 +0000 (0:00:00.735) 0:04:42.518 ********** 2025-06-03 
15:38:28.296950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-03 15:38:28.296959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-03 15:38:28.296967 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.297002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-03 15:38:28.297011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-03 15:38:28.297019 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.297027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-03 15:38:28.297036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-03 15:38:28.297044 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.297052 | orchestrator | 2025-06-03 15:38:28.297060 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-06-03 15:38:28.297068 | orchestrator | Tuesday 03 June 2025 15:36:59 +0000 (0:00:01.034) 
0:04:43.552 ********** 2025-06-03 15:38:28.297081 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.297089 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.297097 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.297105 | orchestrator | 2025-06-03 15:38:28.297113 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-06-03 15:38:28.297127 | orchestrator | Tuesday 03 June 2025 15:37:01 +0000 (0:00:01.585) 0:04:45.138 ********** 2025-06-03 15:38:28.297135 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.297143 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.297151 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.297159 | orchestrator | 2025-06-03 15:38:28.297167 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-06-03 15:38:28.297175 | orchestrator | Tuesday 03 June 2025 15:37:03 +0000 (0:00:01.909) 0:04:47.047 ********** 2025-06-03 15:38:28.297182 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:28.297190 | orchestrator | 2025-06-03 15:38:28.297198 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-06-03 15:38:28.297206 | orchestrator | Tuesday 03 June 2025 15:37:04 +0000 (0:00:01.227) 0:04:48.274 ********** 2025-06-03 15:38:28.297220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:38:28.297229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:38:28.297238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2025-06-03 15:38:28.297254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:38:28.297750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:38:28.297773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:38:28.297782 | orchestrator | 2025-06-03 15:38:28.297790 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-06-03 15:38:28.297799 | orchestrator | Tuesday 03 June 2025 15:37:09 +0000 (0:00:04.916) 0:04:53.191 ********** 2025-06-03 15:38:28.297807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-03 15:38:28.297823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-03 15:38:28.297840 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.297868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-03 15:38:28.297878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-03 15:38:28.297887 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.297895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-03 15:38:28.297908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-03 15:38:28.297922 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.297930 | orchestrator | 2025-06-03 15:38:28.297938 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-06-03 15:38:28.297946 | orchestrator | Tuesday 03 
June 2025 15:37:10 +0000 (0:00:00.906) 0:04:54.098 ********** 2025-06-03 15:38:28.297955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-03 15:38:28.297963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-03 15:38:28.297994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-03 15:38:28.298004 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.298042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-03 15:38:28.298052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-03 15:38:28.298061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-03 15:38:28.298068 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.298077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}})  2025-06-03 15:38:28.298085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-03 15:38:28.298093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-03 15:38:28.298101 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.298109 | orchestrator | 2025-06-03 15:38:28.298117 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-06-03 15:38:28.298125 | orchestrator | Tuesday 03 June 2025 15:37:11 +0000 (0:00:00.794) 0:04:54.892 ********** 2025-06-03 15:38:28.298132 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.298140 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.298148 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.298156 | orchestrator | 2025-06-03 15:38:28.298165 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-06-03 15:38:28.298179 | orchestrator | Tuesday 03 June 2025 15:37:11 +0000 (0:00:00.385) 0:04:55.278 ********** 2025-06-03 15:38:28.298187 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.298195 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.298203 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.298211 | orchestrator | 2025-06-03 15:38:28.298219 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-06-03 15:38:28.298226 | orchestrator | Tuesday 03 June 2025 15:37:12 +0000 (0:00:01.153) 0:04:56.431 ********** 2025-06-03 15:38:28.298234 
| orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:28.298242 | orchestrator | 2025-06-03 15:38:28.298250 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-06-03 15:38:28.298257 | orchestrator | Tuesday 03 June 2025 15:37:14 +0000 (0:00:01.471) 0:04:57.903 ********** 2025-06-03 15:38:28.298271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-03 15:38:28.298280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:38:28.298310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:38:28.298340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-03 15:38:28.298356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:38:28.298366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:38:28.298419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-03 15:38:28.298430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:38:28.298445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:38:28.298478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-03 15:38:28.298493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-03 15:38:28.298503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 
'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:38:28.298536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-03 15:38:28.298550 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-03 15:38:28.298565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298590 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:38:28.298600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-03 15:38:28.298610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-03 15:38:28.298626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:38:28.298711 | 
orchestrator | 2025-06-03 15:38:28.298720 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-06-03 15:38:28.298735 | orchestrator | Tuesday 03 June 2025 15:37:18 +0000 (0:00:03.863) 0:05:01.766 ********** 2025-06-03 15:38:28.298744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-03 15:38:28.298752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:38:28.298761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:38:28.298796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-03 15:38:28.298810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-03 15:38:28.298819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:38:28.298843 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.298856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-03 15:38:28.298868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:38:28.298878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:38:28.298908 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-03 15:38:28.298921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-03 15:38:28.298930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-03 15:38:28.298961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:38:28.298978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:38:28.298986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.298994 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.299002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.299014 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:38:28.299028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-03 15:38:28.299042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-03 15:38:28.299050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.299059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:38:28.299067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:38:28.299075 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.299083 | orchestrator | 2025-06-03 15:38:28.299092 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-06-03 15:38:28.299100 | orchestrator | Tuesday 03 June 2025 15:37:19 +0000 (0:00:01.180) 0:05:02.946 ********** 2025-06-03 15:38:28.299112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-03 15:38:28.299120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-03 15:38:28.299133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-03 15:38:28.299146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-03 15:38:28.299155 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.299163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-03 15:38:28.299175 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-03 15:38:28.299184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-03 15:38:28.299192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-03 15:38:28.299200 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.299208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-03 15:38:28.299216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-03 15:38:28.299224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-03 15:38:28.299232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-03 15:38:28.299240 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.299248 | orchestrator | 2025-06-03 15:38:28.299256 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-06-03 15:38:28.299264 | orchestrator | Tuesday 03 June 2025 15:37:20 +0000 (0:00:01.003) 0:05:03.950 ********** 2025-06-03 15:38:28.299272 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.299279 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.299285 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.299292 | orchestrator | 2025-06-03 15:38:28.299299 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-06-03 15:38:28.299305 | orchestrator | Tuesday 03 June 2025 15:37:20 +0000 (0:00:00.431) 0:05:04.382 ********** 2025-06-03 15:38:28.299312 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.299319 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.299325 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.299337 | orchestrator | 2025-06-03 15:38:28.299343 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-06-03 15:38:28.299350 | orchestrator | Tuesday 03 June 2025 15:37:22 +0000 (0:00:01.733) 0:05:06.116 ********** 2025-06-03 15:38:28.299357 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:28.299364 | orchestrator | 2025-06-03 15:38:28.299370 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-06-03 15:38:28.299380 | orchestrator | Tuesday 03 June 2025 15:37:24 +0000 (0:00:01.714) 0:05:07.830 ********** 2025-06-03 15:38:28.299391 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:38:28.299399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:38:28.299407 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-03 15:38:28.299415 | orchestrator | 2025-06-03 15:38:28.299421 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-06-03 15:38:28.299428 | orchestrator | Tuesday 03 June 2025 15:37:26 +0000 (0:00:02.504) 0:05:10.335 ********** 2025-06-03 15:38:28.299438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-03 15:38:28.299452 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.299463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-03 15:38:28.299470 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.299477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-03 15:38:28.299484 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.299491 | orchestrator | 2025-06-03 15:38:28.299498 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-06-03 15:38:28.299504 | orchestrator | Tuesday 03 June 2025 15:37:27 +0000 (0:00:00.375) 0:05:10.710 ********** 2025-06-03 15:38:28.299512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-03 15:38:28.299518 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.299525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-03 15:38:28.299532 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.299539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-03 15:38:28.299545 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.299552 | orchestrator | 2025-06-03 15:38:28.299559 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-06-03 15:38:28.299569 | orchestrator | Tuesday 03 June 2025 15:37:28 +0000 (0:00:01.022) 0:05:11.732 ********** 2025-06-03 15:38:28.299576 | orchestrator | skipping: [testbed-node-0] 
2025-06-03 15:38:28.299583 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.299590 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.299596 | orchestrator | 2025-06-03 15:38:28.299603 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-06-03 15:38:28.299610 | orchestrator | Tuesday 03 June 2025 15:37:28 +0000 (0:00:00.444) 0:05:12.177 ********** 2025-06-03 15:38:28.299617 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.299623 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.299630 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.299653 | orchestrator | 2025-06-03 15:38:28.299664 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-06-03 15:38:28.299675 | orchestrator | Tuesday 03 June 2025 15:37:29 +0000 (0:00:01.340) 0:05:13.517 ********** 2025-06-03 15:38:28.299687 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:38:28.299698 | orchestrator | 2025-06-03 15:38:28.299709 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-06-03 15:38:28.299719 | orchestrator | Tuesday 03 June 2025 15:37:31 +0000 (0:00:01.701) 0:05:15.219 ********** 2025-06-03 15:38:28.299730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.299743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.299751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': 
'9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.299763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.299771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.299782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-03 15:38:28.299789 | orchestrator | 2025-06-03 15:38:28.299796 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-06-03 15:38:28.299803 | orchestrator | Tuesday 03 June 2025 15:37:37 +0000 (0:00:06.223) 0:05:21.442 ********** 2025-06-03 15:38:28.299862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-03 15:38:28.299882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-03 15:38:28.299889 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.299899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-03 15:38:28.299911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-03 15:38:28.299918 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.299925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-03 15:38:28.299937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-03 15:38:28.299944 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.299950 | orchestrator | 2025-06-03 15:38:28.299957 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-06-03 15:38:28.299964 | orchestrator | Tuesday 03 June 2025 15:37:38 +0000 (0:00:00.654) 0:05:22.097 ********** 2025-06-03 15:38:28.299971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-03 15:38:28.299978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-03 15:38:28.299985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-03 15:38:28.299992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-03 15:38:28.299999 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:38:28.300009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-03 15:38:28.300015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-03 15:38:28.300022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-03 15:38:28.300029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-03 15:38:28.300036 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:38:28.300047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-03 15:38:28.300054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-03 15:38:28.300061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-03 15:38:28.300072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-03 15:38:28.300079 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:38:28.300086 | orchestrator | 2025-06-03 15:38:28.300092 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-06-03 15:38:28.300099 | orchestrator | Tuesday 03 June 2025 15:37:40 +0000 (0:00:01.726) 0:05:23.823 ********** 2025-06-03 15:38:28.300106 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.300113 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.300120 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.300126 | orchestrator | 2025-06-03 15:38:28.300133 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-06-03 15:38:28.300140 | orchestrator | Tuesday 03 June 2025 15:37:41 +0000 (0:00:01.322) 0:05:25.145 ********** 2025-06-03 15:38:28.300147 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:38:28.300153 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:38:28.300160 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:38:28.300167 | orchestrator | 
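The haproxy-config tasks above iterate over per-service dicts whose `haproxy` sub-map (keys like `enabled`, `mode`, `port`, `listen_port`, `external`, `external_fqdn`, `tls_backend`) drives the generated frontend/backend entries. As a minimal illustrative sketch (not kolla-ansible's actual template logic; `haproxy_listeners` is a hypothetical helper), the skyline item logged above could be flattened like this:

```python
# Hypothetical sketch of how a kolla service item's 'haproxy' sub-dict
# (as logged above for skyline-apiserver) maps to listener entries.
skyline_apiserver = {
    "container_name": "skyline_apiserver",
    "enabled": True,
    "haproxy": {
        "skyline_apiserver": {
            "enabled": "yes", "mode": "http", "external": False,
            "port": "9998", "listen_port": "9998", "tls_backend": "no",
        },
        "skyline_apiserver_external": {
            "enabled": "yes", "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "9998", "listen_port": "9998", "tls_backend": "no",
        },
    },
}

def haproxy_listeners(service):
    """Yield (name, mode, listen_port, external) for each enabled listener."""
    for name, cfg in service.get("haproxy", {}).items():
        if cfg.get("enabled") == "yes":
            yield name, cfg["mode"], cfg["listen_port"], bool(cfg.get("external"))

for entry in haproxy_listeners(skyline_apiserver):
    print(entry)
```

Each service thus contributes an internal listener and, when `external: True`, a second one bound behind the external FQDN (`api.testbed.osism.xyz` in this testbed).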
2025-06-03 15:38:28.300173 | orchestrator | TASK [include_role : swift] ****************************************************
2025-06-03 15:38:28.300180 | orchestrator | Tuesday 03 June 2025 15:37:43 +0000 (0:00:02.218) 0:05:27.364 **********
2025-06-03 15:38:28.300187 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.300193 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.300200 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.300207 | orchestrator |
2025-06-03 15:38:28.300213 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-06-03 15:38:28.300220 | orchestrator | Tuesday 03 June 2025 15:37:44 +0000 (0:00:00.344) 0:05:27.709 **********
2025-06-03 15:38:28.300227 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.300233 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.300240 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.300247 | orchestrator |
2025-06-03 15:38:28.300254 | orchestrator | TASK [include_role : trove] ****************************************************
2025-06-03 15:38:28.300261 | orchestrator | Tuesday 03 June 2025 15:37:44 +0000 (0:00:00.632) 0:05:28.342 **********
2025-06-03 15:38:28.300267 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.300274 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.300281 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.300287 | orchestrator |
2025-06-03 15:38:28.300294 | orchestrator | TASK [include_role : venus] ****************************************************
2025-06-03 15:38:28.300301 | orchestrator | Tuesday 03 June 2025 15:37:44 +0000 (0:00:00.307) 0:05:28.650 **********
2025-06-03 15:38:28.300308 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.300314 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.300321 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.300328 | orchestrator |
2025-06-03 15:38:28.300335 | orchestrator | TASK [include_role : watcher] **************************************************
2025-06-03 15:38:28.300342 | orchestrator | Tuesday 03 June 2025 15:37:45 +0000 (0:00:00.318) 0:05:28.968 **********
2025-06-03 15:38:28.300348 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.300355 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.300361 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.300368 | orchestrator |
2025-06-03 15:38:28.300375 | orchestrator | TASK [include_role : zun] ******************************************************
2025-06-03 15:38:28.300382 | orchestrator | Tuesday 03 June 2025 15:37:45 +0000 (0:00:00.305) 0:05:29.274 **********
2025-06-03 15:38:28.300388 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.300395 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.300402 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.300412 | orchestrator |
2025-06-03 15:38:28.300422 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-06-03 15:38:28.300429 | orchestrator | Tuesday 03 June 2025 15:37:46 +0000 (0:00:00.852) 0:05:30.126 **********
2025-06-03 15:38:28.300436 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:38:28.300443 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:38:28.300449 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:38:28.300456 | orchestrator |
2025-06-03 15:38:28.300462 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-06-03 15:38:28.300469 | orchestrator | Tuesday 03 June 2025 15:37:47 +0000 (0:00:00.368) 0:05:30.814 **********
2025-06-03 15:38:28.300476 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:38:28.300482 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:38:28.300489 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:38:28.300496 | orchestrator |
2025-06-03 15:38:28.300502 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-06-03 15:38:28.300509 | orchestrator | Tuesday 03 June 2025 15:37:47 +0000 (0:00:00.368) 0:05:31.183 **********
2025-06-03 15:38:28.300516 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:38:28.300522 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:38:28.300529 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:38:28.300536 | orchestrator |
2025-06-03 15:38:28.300542 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-06-03 15:38:28.300549 | orchestrator | Tuesday 03 June 2025 15:37:48 +0000 (0:00:01.291) 0:05:32.474 **********
2025-06-03 15:38:28.300556 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:38:28.300562 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:38:28.300574 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:38:28.300581 | orchestrator |
2025-06-03 15:38:28.300634 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-06-03 15:38:28.300663 | orchestrator | Tuesday 03 June 2025 15:37:49 +0000 (0:00:00.876) 0:05:33.351 **********
2025-06-03 15:38:28.300674 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:38:28.300681 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:38:28.300687 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:38:28.300694 | orchestrator |
2025-06-03 15:38:28.300701 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-06-03 15:38:28.300707 | orchestrator | Tuesday 03 June 2025 15:37:50 +0000 (0:00:00.890) 0:05:34.241 **********
2025-06-03 15:38:28.300714 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:38:28.300721 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:38:28.300727 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:38:28.300734 | orchestrator |
2025-06-03 15:38:28.300740 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-06-03 15:38:28.300747 | orchestrator | Tuesday 03 June 2025 15:38:00 +0000 (0:00:09.658) 0:05:43.899 **********
2025-06-03 15:38:28.300754 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:38:28.300760 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:38:28.300767 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:38:28.300773 | orchestrator |
2025-06-03 15:38:28.300780 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-06-03 15:38:28.300787 | orchestrator | Tuesday 03 June 2025 15:38:00 +0000 (0:00:00.729) 0:05:44.629 **********
2025-06-03 15:38:28.300793 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:38:28.300800 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:38:28.300806 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:38:28.300813 | orchestrator |
2025-06-03 15:38:28.300819 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-06-03 15:38:28.300826 | orchestrator | Tuesday 03 June 2025 15:38:09 +0000 (0:00:08.331) 0:05:52.961 **********
2025-06-03 15:38:28.300833 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:38:28.300839 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:38:28.300846 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:38:28.300852 | orchestrator |
2025-06-03 15:38:28.300859 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-06-03 15:38:28.300866 | orchestrator | Tuesday 03 June 2025 15:38:13 +0000 (0:00:03.780) 0:05:56.742 **********
2025-06-03 15:38:28.300878 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:38:28.300884 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:38:28.300891 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:38:28.300898 | orchestrator |
2025-06-03 15:38:28.300904 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-06-03 15:38:28.300911 | orchestrator | Tuesday 03 June 2025 15:38:22 +0000 (0:00:09.593) 0:06:06.335 **********
2025-06-03 15:38:28.300917 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.300944 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.300952 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.300959 | orchestrator |
2025-06-03 15:38:28.300966 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-06-03 15:38:28.300972 | orchestrator | Tuesday 03 June 2025 15:38:23 +0000 (0:00:00.351) 0:06:06.686 **********
2025-06-03 15:38:28.300979 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.300986 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.300992 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.300999 | orchestrator |
2025-06-03 15:38:28.301005 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-06-03 15:38:28.301012 | orchestrator | Tuesday 03 June 2025 15:38:23 +0000 (0:00:00.723) 0:06:07.409 **********
2025-06-03 15:38:28.301019 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.301025 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.301031 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.301038 | orchestrator |
2025-06-03 15:38:28.301045 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-06-03 15:38:28.301051 | orchestrator | Tuesday 03 June 2025 15:38:24 +0000 (0:00:00.368) 0:06:07.778 **********
2025-06-03 15:38:28.301058 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.301064 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.301071 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.301078 | orchestrator |
2025-06-03 15:38:28.301084 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-06-03 15:38:28.301091 | orchestrator | Tuesday 03 June 2025 15:38:24 +0000 (0:00:00.337) 0:06:08.116 **********
2025-06-03 15:38:28.301098 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.301104 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.301111 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.301118 | orchestrator |
2025-06-03 15:38:28.301124 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-06-03 15:38:28.301135 | orchestrator | Tuesday 03 June 2025 15:38:24 +0000 (0:00:00.402) 0:06:08.518 **********
2025-06-03 15:38:28.301142 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:38:28.301149 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:38:28.301155 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:38:28.301162 | orchestrator |
2025-06-03 15:38:28.301168 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-06-03 15:38:28.301175 | orchestrator | Tuesday 03 June 2025 15:38:25 +0000 (0:00:00.711) 0:06:09.229 **********
2025-06-03 15:38:28.301182 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:38:28.301189 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:38:28.301195 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:38:28.301202 | orchestrator |
2025-06-03 15:38:28.301209 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-06-03 15:38:28.301216 | orchestrator | Tuesday 03 June 2025 15:38:26 +0000 (0:00:00.892) 0:06:10.122 **********
2025-06-03 15:38:28.301222 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:38:28.301229 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:38:28.301235 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:38:28.301242 | orchestrator |
2025-06-03 15:38:28.301249 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:38:28.301256 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-03 15:38:28.301293 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-03 15:38:28.301302 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-03 15:38:28.301309 | orchestrator |
2025-06-03 15:38:28.301316 | orchestrator |
2025-06-03 15:38:28.301322 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:38:28.301329 | orchestrator | Tuesday 03 June 2025 15:38:27 +0000 (0:00:00.829) 0:06:10.952 **********
2025-06-03 15:38:28.301336 | orchestrator | ===============================================================================
2025-06-03 15:38:28.301342 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.66s
2025-06-03 15:38:28.301349 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.59s
2025-06-03 15:38:28.301356 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.33s
2025-06-03 15:38:28.301362 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.22s
2025-06-03 15:38:28.301369 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 6.08s
2025-06-03 15:38:28.301376 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.29s
2025-06-03 15:38:28.301382 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.92s
2025-06-03 15:38:28.301389 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.72s
2025-06-03 15:38:28.301396 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.23s
2025-06-03 15:38:28.301402 | orchestrator |
haproxy-config : Add configuration for glance when using single external frontend --- 4.16s
2025-06-03 15:38:28.301409 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.15s
2025-06-03 15:38:28.301415 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.12s
2025-06-03 15:38:28.301422 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.11s
2025-06-03 15:38:28.301429 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.97s
2025-06-03 15:38:28.301435 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.92s
2025-06-03 15:38:28.301442 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.87s
2025-06-03 15:38:28.301449 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 3.86s
2025-06-03 15:38:28.301455 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.81s
2025-06-03 15:38:28.301462 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.78s
2025-06-03 15:38:28.301469 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.78s
2025-06-03 15:38:31.315202 | orchestrator | 2025-06-03 15:38:31 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED
2025-06-03 15:38:31.315802 | orchestrator | 2025-06-03 15:38:31 | INFO  | Task d975f909-c71f-4dcc-a54f-d1176b7bd747 is in state STARTED
2025-06-03 15:38:31.317234 | orchestrator | 2025-06-03 15:38:31 | INFO  | Task d484ed7a-4dc2-4560-958c-f7c55614b831 is in state STARTED
2025-06-03 15:38:31.317279 | orchestrator | 2025-06-03 15:38:31 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:38:34.352870 | orchestrator | 2025-06-03 15:38:34 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state STARTED 2025-06-03
15:40:54.636426 | orchestrator
| 2025-06-03 15:40:54 | INFO  | Task d975f909-c71f-4dcc-a54f-d1176b7bd747 is in state STARTED
2025-06-03 15:40:54.639757 | orchestrator | 2025-06-03 15:40:54 | INFO  | Task d484ed7a-4dc2-4560-958c-f7c55614b831 is in state STARTED
2025-06-03 15:40:54.639812 | orchestrator | 2025-06-03 15:40:54 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:40:57.690853 | orchestrator | 2025-06-03 15:40:57 | INFO  | Task ed6b53e8-2e4e-43bc-928b-e92ef6fd78d0 is in state SUCCESS
2025-06-03 15:40:57.694494 | orchestrator |
2025-06-03 15:40:57.694599 | orchestrator |
2025-06-03 15:40:57.694615 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-06-03 15:40:57.694627 | orchestrator |
2025-06-03 15:40:57.694683 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-06-03 15:40:57.694695 | orchestrator | Tuesday 03 June 2025 15:29:41 +0000 (0:00:00.723) 0:00:00.723 **********
2025-06-03 15:40:57.694707 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:40:57.694718 | orchestrator |
2025-06-03 15:40:57.694728 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-06-03 15:40:57.694738 | orchestrator | Tuesday 03 June 2025 15:29:42 +0000 (0:00:01.101) 0:00:01.824 **********
2025-06-03 15:40:57.694798 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.694811 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:40:57.694821 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:40:57.694831 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.694841 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:40:57.694851 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.694860 | orchestrator |
2025-06-03 15:40:57.694870 | orchestrator | TASK [ceph-facts : Set_fact is_atomic]
*****************************************
2025-06-03 15:40:57.694880 | orchestrator | Tuesday 03 June 2025 15:29:44 +0000 (0:00:00.844) 0:00:03.469 **********
2025-06-03 15:40:57.694890 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:40:57.694900 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:40:57.694909 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:40:57.694919 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.694929 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.695034 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.695045 | orchestrator |
2025-06-03 15:40:57.695056 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-06-03 15:40:57.695068 | orchestrator | Tuesday 03 June 2025 15:29:45 +0000 (0:00:00.844) 0:00:04.314 **********
2025-06-03 15:40:57.695079 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:40:57.695115 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:40:57.695126 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:40:57.695137 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.695147 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.695159 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.695170 | orchestrator |
2025-06-03 15:40:57.695182 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-06-03 15:40:57.695193 | orchestrator | Tuesday 03 June 2025 15:29:46 +0000 (0:00:00.982) 0:00:05.297 **********
2025-06-03 15:40:57.695204 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:40:57.695215 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:40:57.695225 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:40:57.695236 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.695246 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.695257 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.695269 | orchestrator |
2025-06-03 15:40:57.695280 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-06-03 15:40:57.695292 | orchestrator | Tuesday 03 June 2025 15:29:47 +0000 (0:00:00.751) 0:00:06.048 **********
2025-06-03 15:40:57.695302 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:40:57.695313 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:40:57.695324 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:40:57.695335 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.695346 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.695357 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.695367 | orchestrator |
2025-06-03 15:40:57.695379 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-06-03 15:40:57.695390 | orchestrator | Tuesday 03 June 2025 15:29:47 +0000 (0:00:00.745) 0:00:06.794 **********
2025-06-03 15:40:57.695442 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:40:57.695453 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:40:57.695462 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:40:57.695505 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.695516 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.695543 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.695553 | orchestrator |
2025-06-03 15:40:57.695563 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-06-03 15:40:57.695603 | orchestrator | Tuesday 03 June 2025 15:29:48 +0000 (0:00:01.177) 0:00:07.971 **********
2025-06-03 15:40:57.695613 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.695656 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.695667 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.695695 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.695707 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.695716 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.695726 | orchestrator |
2025-06-03 15:40:57.695735 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-06-03 15:40:57.695745 | orchestrator | Tuesday 03 June 2025 15:29:49 +0000 (0:00:00.913) 0:00:08.885 **********
2025-06-03 15:40:57.695853 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:40:57.695864 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:40:57.695874 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:40:57.695884 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.695893 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.695925 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.695937 | orchestrator |
2025-06-03 15:40:57.695946 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-06-03 15:40:57.695956 | orchestrator | Tuesday 03 June 2025 15:29:50 +0000 (0:00:00.999) 0:00:09.884 **********
2025-06-03 15:40:57.695966 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-03 15:40:57.695990 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-03 15:40:57.696000 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-03 15:40:57.696010 | orchestrator |
2025-06-03 15:40:57.696020 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-06-03 15:40:57.696038 | orchestrator | Tuesday 03 June 2025 15:29:51 +0000 (0:00:00.632) 0:00:10.517 **********
2025-06-03 15:40:57.696048 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:40:57.696058 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:40:57.696067 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:40:57.696077 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.696086 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.696095 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.696105 |
orchestrator | 2025-06-03 15:40:57.696164 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-03 15:40:57.696176 | orchestrator | Tuesday 03 June 2025 15:29:52 +0000 (0:00:01.353) 0:00:11.871 ********** 2025-06-03 15:40:57.696212 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-03 15:40:57.696222 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-03 15:40:57.696232 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-03 15:40:57.696242 | orchestrator | 2025-06-03 15:40:57.696252 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-03 15:40:57.696261 | orchestrator | Tuesday 03 June 2025 15:29:55 +0000 (0:00:03.069) 0:00:14.940 ********** 2025-06-03 15:40:57.696271 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-03 15:40:57.696281 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-03 15:40:57.696291 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-03 15:40:57.696300 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.696310 | orchestrator | 2025-06-03 15:40:57.696320 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-03 15:40:57.696329 | orchestrator | Tuesday 03 June 2025 15:29:56 +0000 (0:00:00.716) 0:00:15.657 ********** 2025-06-03 15:40:57.696341 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.696354 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.696364 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.696374 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.696384 | orchestrator | 2025-06-03 15:40:57.696393 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-03 15:40:57.696403 | orchestrator | Tuesday 03 June 2025 15:29:57 +0000 (0:00:01.242) 0:00:16.900 ********** 2025-06-03 15:40:57.696415 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.696427 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.696438 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.696507 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.696519 | orchestrator | 2025-06-03 15:40:57.696546 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-03 15:40:57.696555 | orchestrator | Tuesday 03 June 2025 15:29:58 +0000 (0:00:00.390) 0:00:17.290 ********** 2025-06-03 15:40:57.696573 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-03 15:29:53.501772', 'end': '2025-06-03 15:29:53.801024', 'delta': '0:00:00.299252', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.696596 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-03 15:29:54.458962', 'end': '2025-06-03 15:29:54.758054', 'delta': '0:00:00.299092', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  
2025-06-03 15:40:57.696607 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-03 15:29:55.430976', 'end': '2025-06-03 15:29:55.717256', 'delta': '0:00:00.286280', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.696617 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.696627 | orchestrator | 2025-06-03 15:40:57.696637 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-03 15:40:57.696647 | orchestrator | Tuesday 03 June 2025 15:29:58 +0000 (0:00:00.175) 0:00:17.466 ********** 2025-06-03 15:40:57.696657 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.696667 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.696770 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.696812 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.696822 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.696832 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.696841 | orchestrator | 2025-06-03 15:40:57.696851 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-03 15:40:57.696861 | orchestrator | Tuesday 03 June 2025 15:29:59 +0000 (0:00:01.481) 0:00:18.947 ********** 2025-06-03 15:40:57.696870 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.696880 | orchestrator | 2025-06-03 15:40:57.696890 | orchestrator | TASK [ceph-facts : Set_fact current_fsid 
rc 1] ********************************* 2025-06-03 15:40:57.696906 | orchestrator | Tuesday 03 June 2025 15:30:00 +0000 (0:00:00.792) 0:00:19.739 ********** 2025-06-03 15:40:57.696916 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.696926 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.696935 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.696945 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.696955 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.696964 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.696974 | orchestrator | 2025-06-03 15:40:57.696984 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-03 15:40:57.696993 | orchestrator | Tuesday 03 June 2025 15:30:02 +0000 (0:00:01.599) 0:00:21.339 ********** 2025-06-03 15:40:57.697003 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.697012 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.697022 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.697031 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.697041 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.697050 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.697060 | orchestrator | 2025-06-03 15:40:57.697069 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-03 15:40:57.697079 | orchestrator | Tuesday 03 June 2025 15:30:03 +0000 (0:00:01.326) 0:00:22.668 ********** 2025-06-03 15:40:57.697089 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.697098 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.697108 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.697117 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.697127 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.697136 | orchestrator | skipping: 
[testbed-node-5] 2025-06-03 15:40:57.697146 | orchestrator | 2025-06-03 15:40:57.697155 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-03 15:40:57.697165 | orchestrator | Tuesday 03 June 2025 15:30:04 +0000 (0:00:00.959) 0:00:23.627 ********** 2025-06-03 15:40:57.697175 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.697184 | orchestrator | 2025-06-03 15:40:57.697194 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-03 15:40:57.697203 | orchestrator | Tuesday 03 June 2025 15:30:04 +0000 (0:00:00.182) 0:00:23.810 ********** 2025-06-03 15:40:57.697218 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.697228 | orchestrator | 2025-06-03 15:40:57.697238 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-03 15:40:57.697306 | orchestrator | Tuesday 03 June 2025 15:30:05 +0000 (0:00:00.335) 0:00:24.146 ********** 2025-06-03 15:40:57.697317 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.697326 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.697363 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.697375 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.697385 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.697395 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.697404 | orchestrator | 2025-06-03 15:40:57.697414 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-03 15:40:57.697430 | orchestrator | Tuesday 03 June 2025 15:30:05 +0000 (0:00:00.712) 0:00:24.859 ********** 2025-06-03 15:40:57.697440 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.697450 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.697460 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.697469 | orchestrator | skipping: 
[testbed-node-3] 2025-06-03 15:40:57.697478 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.697488 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.697497 | orchestrator | 2025-06-03 15:40:57.697507 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-03 15:40:57.697516 | orchestrator | Tuesday 03 June 2025 15:30:06 +0000 (0:00:00.714) 0:00:25.573 ********** 2025-06-03 15:40:57.697612 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.697789 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.697801 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.697810 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.697820 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.697829 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.697838 | orchestrator | 2025-06-03 15:40:57.697848 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-03 15:40:57.697858 | orchestrator | Tuesday 03 June 2025 15:30:07 +0000 (0:00:00.649) 0:00:26.223 ********** 2025-06-03 15:40:57.697896 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.697906 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.697916 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.697925 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.697934 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.697944 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.697954 | orchestrator | 2025-06-03 15:40:57.697963 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-03 15:40:57.697973 | orchestrator | Tuesday 03 June 2025 15:30:07 +0000 (0:00:00.619) 0:00:26.843 ********** 2025-06-03 15:40:57.697983 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.697992 | orchestrator | skipping: 
[testbed-node-1] 2025-06-03 15:40:57.698145 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.698155 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.698163 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.698171 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.698179 | orchestrator | 2025-06-03 15:40:57.698187 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-03 15:40:57.698195 | orchestrator | Tuesday 03 June 2025 15:30:08 +0000 (0:00:00.541) 0:00:27.385 ********** 2025-06-03 15:40:57.698203 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.698210 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.698218 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.698226 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.698234 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.698242 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.698249 | orchestrator | 2025-06-03 15:40:57.698257 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-03 15:40:57.698265 | orchestrator | Tuesday 03 June 2025 15:30:09 +0000 (0:00:00.652) 0:00:28.038 ********** 2025-06-03 15:40:57.698273 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.698281 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.698289 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.698296 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.698304 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.698312 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.698320 | orchestrator | 2025-06-03 15:40:57.698328 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-03 15:40:57.698336 | orchestrator | Tuesday 03 June 2025 15:30:09 +0000 (0:00:00.552) 
0:00:28.590 ********** 2025-06-03 15:40:57.698344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': 
[], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de', 'scsi-SQEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:57.698549 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:57.698559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698584 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7', 'scsi-SQEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7-part1', 'scsi-SQEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7-part14', 'scsi-SQEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7-part15', 'scsi-SQEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7-part16', 'scsi-SQEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:57.698661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:57.698669 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.698678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698700 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698750 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.698758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.698771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985', 'scsi-SQEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985-part1', 'scsi-SQEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985-part14', 'scsi-SQEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985-part15', 'scsi-SQEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985-part16', 'scsi-SQEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:57.701356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:57.701445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5a262827--4eba--5d37--ab06--09e1d49a835c-osd--block--5a262827--4eba--5d37--ab06--09e1d49a835c', 'dm-uuid-LVM-iCTuAI4EJib0jwbvb8c4dXUAVjPvH6yyQD7EGdtmsu0AgRLszQFCT51KxWbLYqCJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.701462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d47078ac--4564--569b--bfa7--6d988d420f95-osd--block--d47078ac--4564--569b--bfa7--6d988d420f95', 'dm-uuid-LVM-MlNOD7DMw9sVFxWua6nlui2P6JGLIXhA9i9s0R6rxyRXeXmxqEKjHCeK1WnDSagY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.701475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.701489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.701500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.701569 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.701585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.701597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.701617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.701652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.701672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.701692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f00e4ac9--9831--582f--92bc--f2b318630797-osd--block--f00e4ac9--9831--582f--92bc--f2b318630797', 'dm-uuid-LVM-99pp97M8vSiq1DcdfNowOmyxQeBHt2RQXSbZdQTdzI57JNQcp5rC1M7FuVrcNJ3v'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.701711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2547461e--5dcb--5046--b3ed--0a182c83d3a8-osd--block--2547461e--5dcb--5046--b3ed--0a182c83d3a8', 'dm-uuid-LVM-9FhNrjVXl0cAWcs1aJgZ36y2TkiyPUOoyyVrKRwL2rhhw9kJzyHtCgnDt7vmQNTt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.701740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 
'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part1', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part14', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part15', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part16', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 
'virtual': 1}})  2025-06-03 15:40:57.701788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.701809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5a262827--4eba--5d37--ab06--09e1d49a835c-osd--block--5a262827--4eba--5d37--ab06--09e1d49a835c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-emfxfm-5qIT-TG7n-rhmg-KsOA-8KKz-w6ga7w', 'scsi-0QEMU_QEMU_HARDDISK_5c901d52-eede-42c5-873c-7ade3ca032e1', 'scsi-SQEMU_QEMU_HARDDISK_5c901d52-eede-42c5-873c-7ade3ca032e1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:57.701831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d47078ac--4564--569b--bfa7--6d988d420f95-osd--block--d47078ac--4564--569b--bfa7--6d988d420f95'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WBkiQf-THpz-Svwy-wmks-s5gt-2CGA-7xevri', 'scsi-0QEMU_QEMU_HARDDISK_b4ac7e97-dff3-4114-bb9f-c387d4fd8c04', 'scsi-SQEMU_QEMU_HARDDISK_b4ac7e97-dff3-4114-bb9f-c387d4fd8c04'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:57.701862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.701881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61b072b3-0d8d-4456-975d-55fef61370d3', 'scsi-SQEMU_QEMU_HARDDISK_61b072b3-0d8d-4456-975d-55fef61370d3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:57.701900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.701932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:57.701954 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.701986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.702006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.702218 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.702234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.702248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--610c71bb--335d--5813--8d53--12327c30775e-osd--block--610c71bb--335d--5813--8d53--12327c30775e', 'dm-uuid-LVM-oBbrD2y50tGUGcJrG9aMf1XrpfBgDTcIpQggtVkRRBZCLEs5YgTracTrTruq7mo4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.702272 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.702284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ae8860ce--b651--5449--9c0b--e6c018225b94-osd--block--ae8860ce--b651--5449--9c0b--e6c018225b94', 'dm-uuid-LVM-hyyfRBsGTLzhJDBnkMwP7oIAf3aNljpPZneZ7Y2rVIKSrikYC813zvaJkJ2cAlU8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.702317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:57.702331 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.702349 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f00e4ac9--9831--582f--92bc--f2b318630797-osd--block--f00e4ac9--9831--582f--92bc--f2b318630797'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ps2wUN-woOp-sUfc-DGCH-velx-EbWq-ZqQ5PA', 'scsi-0QEMU_QEMU_HARDDISK_88cf38eb-fdbf-404b-9f1d-cd32f6bedf4b', 'scsi-SQEMU_QEMU_HARDDISK_88cf38eb-fdbf-404b-9f1d-cd32f6bedf4b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:57.702362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2547461e--5dcb--5046--b3ed--0a182c83d3a8-osd--block--2547461e--5dcb--5046--b3ed--0a182c83d3a8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gCazGF-eiF7-zfd2-va82-leUV-Ddn3-wTNAz7', 'scsi-0QEMU_QEMU_HARDDISK_35e8ec34-b9aa-4705-9105-50464be240ba', 'scsi-SQEMU_QEMU_HARDDISK_35e8ec34-b9aa-4705-9105-50464be240ba'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:57.702379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.702406 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1c5376b-f7c7-4aac-a0b2-3df8be7d9631', 'scsi-SQEMU_QEMU_HARDDISK_b1c5376b-f7c7-4aac-a0b2-3df8be7d9631'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:57.702419 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:40:57.702431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:40:57.702443 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.702454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-03 15:40:57.702480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-03 15:40:57.702492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-03 15:40:57.702503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-03 15:40:57.702514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-03 15:40:57.702625 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-03 15:40:57.702664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--610c71bb--335d--5813--8d53--12327c30775e-osd--block--610c71bb--335d--5813--8d53--12327c30775e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-C83Vqp-oUHH-KYth-6H1z-1jr1-Nk57-4zq1JG', 'scsi-0QEMU_QEMU_HARDDISK_fa411336-a154-4770-b6c1-ce8fec2c95f2', 'scsi-SQEMU_QEMU_HARDDISK_fa411336-a154-4770-b6c1-ce8fec2c95f2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-03 15:40:57.702685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ae8860ce--b651--5449--9c0b--e6c018225b94-osd--block--ae8860ce--b651--5449--9c0b--e6c018225b94'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-P8VhYf-9wK2-3uzP-XeZ8-f6el-w4Mt-XILiP3', 'scsi-0QEMU_QEMU_HARDDISK_ffe2a0ca-5a38-47a9-803d-00b473435346', 'scsi-SQEMU_QEMU_HARDDISK_ffe2a0ca-5a38-47a9-803d-00b473435346'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-03 15:40:57.702705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed092372-9559-4d48-8a48-c44bdb9ee908', 'scsi-SQEMU_QEMU_HARDDISK_ed092372-9559-4d48-8a48-c44bdb9ee908'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-03 15:40:57.702733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-03 15:40:57.702766 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.702784 | orchestrator |
2025-06-03 15:40:57.702803 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-06-03 15:40:57.702824 | orchestrator | Tuesday 03 June 2025 15:30:11 +0000 (0:00:02.085) 0:00:30.676 **********
2025-06-03 15:40:57.702844 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512',
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.702877 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.702898 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.702917 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.702938 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.702966 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.702998 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703020 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703052 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de', 'scsi-SQEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b65d546-d325-4a0d-b120-75afa88c00de-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703089 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703112 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703144 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512',
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703162 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703179 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703196 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703221 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703250 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703268 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703296 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7', 'scsi-SQEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7-part1', 'scsi-SQEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7-part14', 'scsi-SQEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7-part15', 'scsi-SQEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7-part16', 'scsi-SQEMU_QEMU_HARDDISK_7cf15848-6f73-4878-927d-31873f9154b7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703322 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703340 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.703366 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703393 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True,
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703410 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703427 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.703445 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703462 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703486 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703514 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703571 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703590 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985', 'scsi-SQEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985-part1', 'scsi-SQEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985-part14', 'scsi-SQEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985-part15', 'scsi-SQEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985-part16', 'scsi-SQEMU_QEMU_HARDDISK_420efcde-9aa9-4277-94e6-eff067055985-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703620 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703646 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links':
{'ids': ['dm-name-ceph--5a262827--4eba--5d37--ab06--09e1d49a835c-osd--block--5a262827--4eba--5d37--ab06--09e1d49a835c', 'dm-uuid-LVM-iCTuAI4EJib0jwbvb8c4dXUAVjPvH6yyQD7EGdtmsu0AgRLszQFCT51KxWbLYqCJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703673 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d47078ac--4564--569b--bfa7--6d988d420f95-osd--block--d47078ac--4564--569b--bfa7--6d988d420f95', 'dm-uuid-LVM-MlNOD7DMw9sVFxWua6nlui2P6JGLIXhA9i9s0R6rxyRXeXmxqEKjHCeK1WnDSagY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703690 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.703705 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703722 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703739 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f00e4ac9--9831--582f--92bc--f2b318630797-osd--block--f00e4ac9--9831--582f--92bc--f2b318630797', 'dm-uuid-LVM-99pp97M8vSiq1DcdfNowOmyxQeBHt2RQXSbZdQTdzI57JNQcp5rC1M7FuVrcNJ3v'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703762 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703798 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2547461e--5dcb--5046--b3ed--0a182c83d3a8-osd--block--2547461e--5dcb--5046--b3ed--0a182c83d3a8', 'dm-uuid-LVM-9FhNrjVXl0cAWcs1aJgZ36y2TkiyPUOoyyVrKRwL2rhhw9kJzyHtCgnDt7vmQNTt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703815 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703832 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703849 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703865 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-03 15:40:57.703890 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids':
['dm-name-ceph--610c71bb--335d--5813--8d53--12327c30775e-osd--block--610c71bb--335d--5813--8d53--12327c30775e', 'dm-uuid-LVM-oBbrD2y50tGUGcJrG9aMf1XrpfBgDTcIpQggtVkRRBZCLEs5YgTracTrTruq7mo4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.703925 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.703942 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.703958 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.703976 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.703994 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ae8860ce--b651--5449--9c0b--e6c018225b94-osd--block--ae8860ce--b651--5449--9c0b--e6c018225b94', 'dm-uuid-LVM-hyyfRBsGTLzhJDBnkMwP7oIAf3aNljpPZneZ7Y2rVIKSrikYC813zvaJkJ2cAlU8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704012 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704147 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part1', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part14', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part15', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part16', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704186 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704202 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704228 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5a262827--4eba--5d37--ab06--09e1d49a835c-osd--block--5a262827--4eba--5d37--ab06--09e1d49a835c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-emfxfm-5qIT-TG7n-rhmg-KsOA-8KKz-w6ga7w', 'scsi-0QEMU_QEMU_HARDDISK_5c901d52-eede-42c5-873c-7ade3ca032e1', 'scsi-SQEMU_QEMU_HARDDISK_5c901d52-eede-42c5-873c-7ade3ca032e1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704268 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704285 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704303 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d47078ac--4564--569b--bfa7--6d988d420f95-osd--block--d47078ac--4564--569b--bfa7--6d988d420f95'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WBkiQf-THpz-Svwy-wmks-s5gt-2CGA-7xevri', 'scsi-0QEMU_QEMU_HARDDISK_b4ac7e97-dff3-4114-bb9f-c387d4fd8c04', 'scsi-SQEMU_QEMU_HARDDISK_b4ac7e97-dff3-4114-bb9f-c387d4fd8c04'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704320 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704336 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61b072b3-0d8d-4456-975d-55fef61370d3', 'scsi-SQEMU_QEMU_HARDDISK_61b072b3-0d8d-4456-975d-55fef61370d3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704379 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704396 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704413 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704424 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.704440 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-03 15:40:57.704465 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f00e4ac9--9831--582f--92bc--f2b318630797-osd--block--f00e4ac9--9831--582f--92bc--f2b318630797'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ps2wUN-woOp-sUfc-DGCH-velx-EbWq-ZqQ5PA', 'scsi-0QEMU_QEMU_HARDDISK_88cf38eb-fdbf-404b-9f1d-cd32f6bedf4b', 'scsi-SQEMU_QEMU_HARDDISK_88cf38eb-fdbf-404b-9f1d-cd32f6bedf4b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704477 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2547461e--5dcb--5046--b3ed--0a182c83d3a8-osd--block--2547461e--5dcb--5046--b3ed--0a182c83d3a8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gCazGF-eiF7-zfd2-va82-leUV-Ddn3-wTNAz7', 'scsi-0QEMU_QEMU_HARDDISK_35e8ec34-b9aa-4705-9105-50464be240ba', 'scsi-SQEMU_QEMU_HARDDISK_35e8ec34-b9aa-4705-9105-50464be240ba'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704487 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704498 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1c5376b-f7c7-4aac-a0b2-3df8be7d9631', 'scsi-SQEMU_QEMU_HARDDISK_b1c5376b-f7c7-4aac-a0b2-3df8be7d9631'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704518 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704564 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704575 | orchestrator | skipping: 
[testbed-node-4] 2025-06-03 15:40:57.704586 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704596 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704606 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704809 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704838 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--610c71bb--335d--5813--8d53--12327c30775e-osd--block--610c71bb--335d--5813--8d53--12327c30775e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-C83Vqp-oUHH-KYth-6H1z-1jr1-Nk57-4zq1JG', 'scsi-0QEMU_QEMU_HARDDISK_fa411336-a154-4770-b6c1-ce8fec2c95f2', 'scsi-SQEMU_QEMU_HARDDISK_fa411336-a154-4770-b6c1-ce8fec2c95f2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704849 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ae8860ce--b651--5449--9c0b--e6c018225b94-osd--block--ae8860ce--b651--5449--9c0b--e6c018225b94'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-P8VhYf-9wK2-3uzP-XeZ8-f6el-w4Mt-XILiP3', 'scsi-0QEMU_QEMU_HARDDISK_ffe2a0ca-5a38-47a9-803d-00b473435346', 'scsi-SQEMU_QEMU_HARDDISK_ffe2a0ca-5a38-47a9-803d-00b473435346'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704860 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed092372-9559-4d48-8a48-c44bdb9ee908', 'scsi-SQEMU_QEMU_HARDDISK_ed092372-9559-4d48-8a48-c44bdb9ee908'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704881 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:40:57.704891 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.704902 | orchestrator | 2025-06-03 15:40:57.704912 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-03 15:40:57.704923 | orchestrator | Tuesday 03 June 2025 15:30:13 +0000 (0:00:02.115) 0:00:32.792 ********** 2025-06-03 15:40:57.704935 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.704952 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.704968 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.704993 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.705009 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.705026 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.705042 | orchestrator | 2025-06-03 15:40:57.705059 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-03 15:40:57.705073 | orchestrator | Tuesday 03 June 2025 15:30:15 +0000 (0:00:01.466) 0:00:34.258 ********** 2025-06-03 15:40:57.705083 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.705093 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.705103 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.705112 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.705121 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.705131 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.705140 | orchestrator | 2025-06-03 15:40:57.705150 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-03 15:40:57.705161 | orchestrator | Tuesday 03 June 2025 15:30:16 +0000 (0:00:00.788) 0:00:35.047 ********** 2025-06-03 15:40:57.705171 | orchestrator | skipping: [testbed-node-0] 2025-06-03 
15:40:57.705181 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.705190 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.705200 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.705209 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.705220 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.705230 | orchestrator | 2025-06-03 15:40:57.705240 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-03 15:40:57.705249 | orchestrator | Tuesday 03 June 2025 15:30:16 +0000 (0:00:00.936) 0:00:35.984 ********** 2025-06-03 15:40:57.705259 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.705269 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.705278 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.705288 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.705297 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.705307 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.705318 | orchestrator | 2025-06-03 15:40:57.705335 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-03 15:40:57.705350 | orchestrator | Tuesday 03 June 2025 15:30:17 +0000 (0:00:00.584) 0:00:36.569 ********** 2025-06-03 15:40:57.705367 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.705382 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.705412 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.705429 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.705446 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.705462 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.705479 | orchestrator | 2025-06-03 15:40:57.705496 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-03 15:40:57.705512 | orchestrator | Tuesday 03 June 
2025 15:30:18 +0000 (0:00:00.808) 0:00:37.378 ********** 2025-06-03 15:40:57.705554 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.705571 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.705588 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.705605 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.705621 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.705638 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.705654 | orchestrator | 2025-06-03 15:40:57.705670 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-03 15:40:57.705687 | orchestrator | Tuesday 03 June 2025 15:30:19 +0000 (0:00:00.996) 0:00:38.374 ********** 2025-06-03 15:40:57.705705 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-03 15:40:57.705723 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-03 15:40:57.705740 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-06-03 15:40:57.705757 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-03 15:40:57.705773 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-06-03 15:40:57.705789 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-03 15:40:57.705804 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-06-03 15:40:57.705820 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-03 15:40:57.705835 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-03 15:40:57.705852 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-06-03 15:40:57.705867 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-03 15:40:57.705885 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-03 15:40:57.705901 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-03 15:40:57.705917 | orchestrator | ok: 
[testbed-node-1] => (item=testbed-node-2) 2025-06-03 15:40:57.705933 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-06-03 15:40:57.705949 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-03 15:40:57.705965 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-03 15:40:57.705981 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-03 15:40:57.705997 | orchestrator | 2025-06-03 15:40:57.706069 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-03 15:40:57.706093 | orchestrator | Tuesday 03 June 2025 15:30:22 +0000 (0:00:02.799) 0:00:41.174 ********** 2025-06-03 15:40:57.706109 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-03 15:40:57.706125 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-03 15:40:57.706142 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-03 15:40:57.706175 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.706192 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-06-03 15:40:57.706208 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-06-03 15:40:57.706224 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-06-03 15:40:57.706240 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-06-03 15:40:57.706256 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-06-03 15:40:57.706271 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-03 15:40:57.706288 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.706304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-03 15:40:57.706345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-03 15:40:57.706377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  
2025-06-03 15:40:57.706393 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.706409 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-03 15:40:57.706425 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-03 15:40:57.706442 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-03 15:40:57.706459 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.706475 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.706490 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-03 15:40:57.706507 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-03 15:40:57.706562 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-03 15:40:57.706582 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.706600 | orchestrator | 2025-06-03 15:40:57.706616 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-03 15:40:57.706634 | orchestrator | Tuesday 03 June 2025 15:30:22 +0000 (0:00:00.764) 0:00:41.938 ********** 2025-06-03 15:40:57.706650 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.706667 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.706684 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.706700 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.706717 | orchestrator | 2025-06-03 15:40:57.706733 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-03 15:40:57.706751 | orchestrator | Tuesday 03 June 2025 15:30:24 +0000 (0:00:01.322) 0:00:43.261 ********** 2025-06-03 15:40:57.706768 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.706784 | orchestrator | skipping: 
[testbed-node-4] 2025-06-03 15:40:57.706799 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.706814 | orchestrator | 2025-06-03 15:40:57.706829 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-03 15:40:57.706846 | orchestrator | Tuesday 03 June 2025 15:30:24 +0000 (0:00:00.482) 0:00:43.743 ********** 2025-06-03 15:40:57.706861 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.706877 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.706895 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.706912 | orchestrator | 2025-06-03 15:40:57.706930 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-03 15:40:57.706946 | orchestrator | Tuesday 03 June 2025 15:30:25 +0000 (0:00:00.501) 0:00:44.245 ********** 2025-06-03 15:40:57.706962 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.706980 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.706998 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.707016 | orchestrator | 2025-06-03 15:40:57.707032 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-03 15:40:57.707048 | orchestrator | Tuesday 03 June 2025 15:30:25 +0000 (0:00:00.305) 0:00:44.551 ********** 2025-06-03 15:40:57.707065 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.707082 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.707099 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.707114 | orchestrator | 2025-06-03 15:40:57.707131 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-03 15:40:57.707148 | orchestrator | Tuesday 03 June 2025 15:30:25 +0000 (0:00:00.361) 0:00:44.913 ********** 2025-06-03 15:40:57.707165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:57.707181 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:57.707198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:57.707216 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.707234 | orchestrator | 2025-06-03 15:40:57.707251 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-03 15:40:57.707288 | orchestrator | Tuesday 03 June 2025 15:30:26 +0000 (0:00:00.470) 0:00:45.383 ********** 2025-06-03 15:40:57.707308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:57.707327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:57.707345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:57.707362 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.707379 | orchestrator | 2025-06-03 15:40:57.707396 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-03 15:40:57.707415 | orchestrator | Tuesday 03 June 2025 15:30:26 +0000 (0:00:00.366) 0:00:45.749 ********** 2025-06-03 15:40:57.707433 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:57.707452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:57.707471 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:57.707490 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.707507 | orchestrator | 2025-06-03 15:40:57.707551 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-03 15:40:57.707569 | orchestrator | Tuesday 03 June 2025 15:30:27 +0000 (0:00:00.611) 0:00:46.361 ********** 2025-06-03 15:40:57.707585 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.707612 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.707628 | orchestrator | ok: [testbed-node-5] 
2025-06-03 15:40:57.707643 | orchestrator | 2025-06-03 15:40:57.707659 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-03 15:40:57.707675 | orchestrator | Tuesday 03 June 2025 15:30:27 +0000 (0:00:00.396) 0:00:46.757 ********** 2025-06-03 15:40:57.707690 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-03 15:40:57.707707 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-03 15:40:57.707722 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-03 15:40:57.707738 | orchestrator | 2025-06-03 15:40:57.707755 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-03 15:40:57.707769 | orchestrator | Tuesday 03 June 2025 15:30:28 +0000 (0:00:00.539) 0:00:47.297 ********** 2025-06-03 15:40:57.707802 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-03 15:40:57.707818 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-03 15:40:57.707834 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-03 15:40:57.707850 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-06-03 15:40:57.707866 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-03 15:40:57.707881 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-03 15:40:57.707898 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-03 15:40:57.707914 | orchestrator | 2025-06-03 15:40:57.707929 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-03 15:40:57.707944 | orchestrator | Tuesday 03 June 2025 15:30:28 +0000 (0:00:00.675) 0:00:47.972 ********** 2025-06-03 15:40:57.707960 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2025-06-03 15:40:57.707976 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-03 15:40:57.707990 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-03 15:40:57.708006 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-06-03 15:40:57.708022 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-03 15:40:57.708038 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-03 15:40:57.708053 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-03 15:40:57.708084 | orchestrator | 2025-06-03 15:40:57.708100 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-03 15:40:57.708116 | orchestrator | Tuesday 03 June 2025 15:30:31 +0000 (0:00:02.227) 0:00:50.200 ********** 2025-06-03 15:40:57.708133 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.708151 | orchestrator | 2025-06-03 15:40:57.708167 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-03 15:40:57.708182 | orchestrator | Tuesday 03 June 2025 15:30:32 +0000 (0:00:01.653) 0:00:51.854 ********** 2025-06-03 15:40:57.708198 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.708215 | orchestrator | 2025-06-03 15:40:57.708230 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-03 15:40:57.708246 | orchestrator | Tuesday 03 June 2025 
15:30:34 +0000 (0:00:01.185) 0:00:53.039 ********** 2025-06-03 15:40:57.708263 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.708280 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.708296 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.708313 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.708323 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.708333 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.708343 | orchestrator | 2025-06-03 15:40:57.708353 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-03 15:40:57.708362 | orchestrator | Tuesday 03 June 2025 15:30:34 +0000 (0:00:00.926) 0:00:53.966 ********** 2025-06-03 15:40:57.708372 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.708381 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.708391 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.708400 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.708410 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.708420 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.708429 | orchestrator | 2025-06-03 15:40:57.708439 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-03 15:40:57.708448 | orchestrator | Tuesday 03 June 2025 15:30:36 +0000 (0:00:01.357) 0:00:55.323 ********** 2025-06-03 15:40:57.708457 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.708467 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.708477 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.708486 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.708496 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.708506 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.708515 | orchestrator | 2025-06-03 15:40:57.708552 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2025-06-03 15:40:57.708570 | orchestrator | Tuesday 03 June 2025 15:30:38 +0000 (0:00:01.830) 0:00:57.154 ********** 2025-06-03 15:40:57.708580 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.708589 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.708599 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.708609 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.708618 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.708628 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.708637 | orchestrator | 2025-06-03 15:40:57.708655 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-03 15:40:57.708665 | orchestrator | Tuesday 03 June 2025 15:30:39 +0000 (0:00:01.290) 0:00:58.445 ********** 2025-06-03 15:40:57.708675 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.708685 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.708694 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.708704 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.708713 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.708723 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.708742 | orchestrator | 2025-06-03 15:40:57.708751 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-03 15:40:57.708761 | orchestrator | Tuesday 03 June 2025 15:30:40 +0000 (0:00:01.084) 0:00:59.529 ********** 2025-06-03 15:40:57.708782 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.708793 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.708802 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.708812 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.708822 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.708831 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.708841 | 
orchestrator | 2025-06-03 15:40:57.708850 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-03 15:40:57.708860 | orchestrator | Tuesday 03 June 2025 15:30:41 +0000 (0:00:00.629) 0:01:00.158 ********** 2025-06-03 15:40:57.708869 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.708879 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.708889 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.708898 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.708908 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.708918 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.708927 | orchestrator | 2025-06-03 15:40:57.708943 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-03 15:40:57.708960 | orchestrator | Tuesday 03 June 2025 15:30:42 +0000 (0:00:00.913) 0:01:01.071 ********** 2025-06-03 15:40:57.708976 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.708992 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.709009 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.709026 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.709043 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.709059 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.709073 | orchestrator | 2025-06-03 15:40:57.709083 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-03 15:40:57.709094 | orchestrator | Tuesday 03 June 2025 15:30:43 +0000 (0:00:01.398) 0:01:02.470 ********** 2025-06-03 15:40:57.709103 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.709113 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.709122 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.709132 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.709141 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.709150 | 
orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.709160 | orchestrator | 2025-06-03 15:40:57.709170 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-03 15:40:57.709180 | orchestrator | Tuesday 03 June 2025 15:30:44 +0000 (0:00:01.517) 0:01:03.988 ********** 2025-06-03 15:40:57.709189 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.709199 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.709208 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.709218 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.709228 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.709238 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.709247 | orchestrator | 2025-06-03 15:40:57.709257 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-03 15:40:57.709266 | orchestrator | Tuesday 03 June 2025 15:30:45 +0000 (0:00:00.712) 0:01:04.700 ********** 2025-06-03 15:40:57.709276 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.709286 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.709296 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.709306 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.709315 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.709325 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.709335 | orchestrator | 2025-06-03 15:40:57.709344 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-03 15:40:57.709354 | orchestrator | Tuesday 03 June 2025 15:30:46 +0000 (0:00:00.791) 0:01:05.492 ********** 2025-06-03 15:40:57.709372 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.709382 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.709392 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.709402 | orchestrator | ok: 
[testbed-node-3] 2025-06-03 15:40:57.709412 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.709421 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.709431 | orchestrator | 2025-06-03 15:40:57.709440 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-03 15:40:57.709450 | orchestrator | Tuesday 03 June 2025 15:30:47 +0000 (0:00:00.627) 0:01:06.119 ********** 2025-06-03 15:40:57.709459 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.709469 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.709478 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.709488 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.709498 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.709507 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.709517 | orchestrator | 2025-06-03 15:40:57.709597 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-03 15:40:57.709614 | orchestrator | Tuesday 03 June 2025 15:30:47 +0000 (0:00:00.689) 0:01:06.808 ********** 2025-06-03 15:40:57.709631 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.709649 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.709666 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.709683 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.709700 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.709717 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.709728 | orchestrator | 2025-06-03 15:40:57.709738 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-03 15:40:57.709747 | orchestrator | Tuesday 03 June 2025 15:30:48 +0000 (0:00:00.572) 0:01:07.381 ********** 2025-06-03 15:40:57.709757 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.709767 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.709776 | 
orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.709786 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.709795 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.709804 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.709814 | orchestrator | 2025-06-03 15:40:57.709830 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-03 15:40:57.709840 | orchestrator | Tuesday 03 June 2025 15:30:49 +0000 (0:00:00.675) 0:01:08.057 ********** 2025-06-03 15:40:57.709850 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.709860 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.709869 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.709879 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.709888 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.709898 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.709907 | orchestrator | 2025-06-03 15:40:57.709917 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-03 15:40:57.709938 | orchestrator | Tuesday 03 June 2025 15:30:49 +0000 (0:00:00.487) 0:01:08.545 ********** 2025-06-03 15:40:57.709956 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.709972 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.709988 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.710006 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.710064 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.710075 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.710085 | orchestrator | 2025-06-03 15:40:57.710094 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-03 15:40:57.710104 | orchestrator | Tuesday 03 June 2025 15:30:50 +0000 (0:00:00.648) 0:01:09.193 ********** 2025-06-03 15:40:57.710114 | orchestrator | ok: 
[testbed-node-0] 2025-06-03 15:40:57.710123 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.710133 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.710142 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.710160 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.710168 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.710176 | orchestrator | 2025-06-03 15:40:57.710184 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-03 15:40:57.710191 | orchestrator | Tuesday 03 June 2025 15:30:50 +0000 (0:00:00.541) 0:01:09.735 ********** 2025-06-03 15:40:57.710199 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.710207 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.710215 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.710223 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.710230 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.710238 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.710246 | orchestrator | 2025-06-03 15:40:57.710254 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-06-03 15:40:57.710262 | orchestrator | Tuesday 03 June 2025 15:30:51 +0000 (0:00:01.158) 0:01:10.893 ********** 2025-06-03 15:40:57.710270 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.710278 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.710286 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.710294 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:57.710301 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:57.710309 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:57.710317 | orchestrator | 2025-06-03 15:40:57.710324 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-06-03 15:40:57.710332 | orchestrator | Tuesday 03 June 2025 15:30:53 +0000 (0:00:01.694) 0:01:12.588 
**********
2025-06-03 15:40:57.710340 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:40:57.710348 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:40:57.710356 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:40:57.710364 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:40:57.710371 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:40:57.710379 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:40:57.710387 | orchestrator |
2025-06-03 15:40:57.710394 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-06-03 15:40:57.710402 | orchestrator | Tuesday 03 June 2025 15:30:55 +0000 (0:00:02.036) 0:01:14.624 **********
2025-06-03 15:40:57.710410 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:40:57.710419 | orchestrator |
2025-06-03 15:40:57.710427 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-06-03 15:40:57.710434 | orchestrator | Tuesday 03 June 2025 15:30:56 +0000 (0:00:01.223) 0:01:15.847 **********
2025-06-03 15:40:57.710442 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.710450 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.710458 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.710466 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.710473 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.710481 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.710489 | orchestrator |
2025-06-03 15:40:57.710497 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-06-03 15:40:57.710504 | orchestrator | Tuesday 03 June 2025 15:30:57 +0000 (0:00:00.817) 0:01:16.665 **********
2025-06-03 15:40:57.710512 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.710520 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.710553 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.710568 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.710581 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.710593 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.710607 | orchestrator |
2025-06-03 15:40:57.710616 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-06-03 15:40:57.710624 | orchestrator | Tuesday 03 June 2025 15:30:58 +0000 (0:00:00.541) 0:01:17.207 **********
2025-06-03 15:40:57.710631 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-03 15:40:57.710646 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-03 15:40:57.710654 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-03 15:40:57.710662 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-03 15:40:57.710670 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-03 15:40:57.710677 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-03 15:40:57.710690 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-03 15:40:57.710698 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-03 15:40:57.710706 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-03 15:40:57.710714 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-03 15:40:57.710721 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-03 15:40:57.710729 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-03 15:40:57.710737 | orchestrator |
2025-06-03 15:40:57.710760 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-06-03 15:40:57.710769 | orchestrator | Tuesday 03 June 2025 15:31:00 +0000 (0:00:01.809) 0:01:19.016 **********
2025-06-03 15:40:57.710776 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:40:57.710784 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:40:57.710792 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:40:57.710800 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:40:57.710808 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:40:57.710816 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:40:57.710824 | orchestrator |
2025-06-03 15:40:57.710832 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-06-03 15:40:57.710840 | orchestrator | Tuesday 03 June 2025 15:31:00 +0000 (0:00:00.935) 0:01:19.951 **********
2025-06-03 15:40:57.710847 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.710855 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.710863 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.710871 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.710879 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.710887 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.710894 | orchestrator |
2025-06-03 15:40:57.710902 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-06-03 15:40:57.710910 | orchestrator | Tuesday 03 June 2025 15:31:01 +0000 (0:00:00.956) 0:01:20.908 **********
2025-06-03 15:40:57.710918 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.710926 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.710936 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.710950 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.710963 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.710976 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.710988 | orchestrator |
2025-06-03 15:40:57.711001 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-06-03 15:40:57.711015 | orchestrator | Tuesday 03 June 2025 15:31:02 +0000 (0:00:00.608) 0:01:21.516 **********
2025-06-03 15:40:57.711028 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.711043 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.711051 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.711059 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.711067 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.711075 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.711082 | orchestrator |
2025-06-03 15:40:57.711090 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-06-03 15:40:57.711105 | orchestrator | Tuesday 03 June 2025 15:31:03 +0000 (0:00:00.791) 0:01:22.308 **********
2025-06-03 15:40:57.711114 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:40:57.711122 | orchestrator |
2025-06-03 15:40:57.711130 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-06-03 15:40:57.711138 | orchestrator | Tuesday 03 June 2025 15:31:04 +0000 (0:00:01.158) 0:01:23.466 **********
2025-06-03 15:40:57.711146 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:40:57.711154 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:40:57.711162 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.711170 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.711177 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.711185 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:40:57.711194 | orchestrator |
2025-06-03 15:40:57.711201 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-06-03 15:40:57.711209 | orchestrator | Tuesday 03 June 2025 15:32:22 +0000 (0:01:17.677) 0:02:41.143 **********
2025-06-03 15:40:57.711217 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-03 15:40:57.711226 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-03 15:40:57.711234 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-03 15:40:57.711241 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.711249 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-03 15:40:57.711257 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-03 15:40:57.711265 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-03 15:40:57.711273 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.711281 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-03 15:40:57.711289 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-03 15:40:57.711296 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-03 15:40:57.711304 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.711312 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-03 15:40:57.711320 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-03 15:40:57.711328 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-03 15:40:57.711335 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.711348 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-03 15:40:57.711357 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-03 15:40:57.711365 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-03 15:40:57.711373 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.711381 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-03 15:40:57.711389 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-03 15:40:57.711398 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-03 15:40:57.711412 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.711420 | orchestrator |
2025-06-03 15:40:57.711428 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-06-03 15:40:57.711435 | orchestrator | Tuesday 03 June 2025 15:32:23 +0000 (0:00:01.028) 0:02:42.172 **********
2025-06-03 15:40:57.711443 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.711451 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.711459 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.711467 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.711481 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.711489 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.711497 | orchestrator |
2025-06-03 15:40:57.711505 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-06-03 15:40:57.711513 | orchestrator | Tuesday 03 June 2025 15:32:23 +0000 (0:00:00.141) 0:02:42.854 **********
2025-06-03 15:40:57.711521 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.711549 | orchestrator |
2025-06-03 15:40:57.711558 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-06-03 15:40:57.711566 | orchestrator | Tuesday 03 June 2025 15:32:23 +0000 (0:00:00.141) 0:02:42.995 **********
2025-06-03 15:40:57.711574 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.711582 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.711590 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.711598 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.711607 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.711615 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.711622 | orchestrator |
2025-06-03 15:40:57.711631 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-06-03 15:40:57.711639 | orchestrator | Tuesday 03 June 2025 15:32:25 +0000 (0:00:01.315) 0:02:44.310 **********
2025-06-03 15:40:57.711647 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.711655 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.711663 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.711671 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.711679 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.711687 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.711695 | orchestrator |
2025-06-03 15:40:57.711703 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-06-03 15:40:57.711711 | orchestrator | Tuesday 03 June 2025 15:32:26 +0000 (0:00:01.076) 0:02:45.387 **********
2025-06-03 15:40:57.711719 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.711727 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.711735 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.711743 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.711751 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.711759 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.711767 | orchestrator |
2025-06-03 15:40:57.711775 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-06-03 15:40:57.711784 | orchestrator | Tuesday 03 June 2025 15:32:27 +0000 (0:00:01.243) 0:02:46.630 **********
2025-06-03 15:40:57.711792 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:40:57.711799 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:40:57.711808 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:40:57.711816 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.711824 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.711832 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.711840 | orchestrator |
2025-06-03 15:40:57.711848 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-06-03 15:40:57.711855 | orchestrator | Tuesday 03 June 2025 15:32:30 +0000 (0:00:02.789) 0:02:49.419 **********
2025-06-03 15:40:57.711863 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:40:57.711871 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:40:57.711879 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:40:57.711887 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.711896 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.711904 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.711912 | orchestrator |
2025-06-03 15:40:57.711920 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-06-03 15:40:57.711928 | orchestrator | Tuesday 03 June 2025 15:32:31 +0000 (0:00:00.859) 0:02:50.278 **********
2025-06-03 15:40:57.711940 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:40:57.711970 | orchestrator |
2025-06-03 15:40:57.711985 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-06-03 15:40:57.711999 | orchestrator | Tuesday 03 June 2025 15:32:32 +0000 (0:00:01.334) 0:02:51.613 **********
2025-06-03 15:40:57.712013 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.712027 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.712036 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.712044 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.712052 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.712060 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.712067 | orchestrator |
2025-06-03 15:40:57.712076 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-06-03 15:40:57.712084 | orchestrator | Tuesday 03 June 2025 15:32:33 +0000 (0:00:00.625) 0:02:52.238 **********
2025-06-03 15:40:57.712092 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.712100 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.712107 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.712115 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.712130 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.712138 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.712146 | orchestrator |
2025-06-03 15:40:57.712154 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-06-03 15:40:57.712162 | orchestrator | Tuesday 03 June 2025 15:32:33 +0000 (0:00:00.770) 0:02:53.008 **********
2025-06-03 15:40:57.712170 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.712178 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.712185 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.712193 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.712201 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.712208 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.712216 | orchestrator |
2025-06-03 15:40:57.712224 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-06-03 15:40:57.712241 | orchestrator | Tuesday 03 June 2025 15:32:34 +0000 (0:00:00.585) 0:02:53.594 **********
2025-06-03 15:40:57.712250 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.712258 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.712266 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.712273 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.712282 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.712290 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.712297 | orchestrator |
2025-06-03 15:40:57.712305 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-06-03 15:40:57.712313 | orchestrator | Tuesday 03 June 2025 15:32:35 +0000 (0:00:00.857) 0:02:54.452 **********
2025-06-03 15:40:57.712321 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.712329 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.712337 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.712345 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.712352 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.712360 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.712368 | orchestrator |
2025-06-03 15:40:57.712377 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-06-03 15:40:57.712384 | orchestrator | Tuesday 03 June 2025 15:32:36 +0000 (0:00:00.571) 0:02:55.023 **********
2025-06-03 15:40:57.712392 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.712400 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.712408 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.712416 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.712424 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.712431 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.712439 | orchestrator |
2025-06-03 15:40:57.712447 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-06-03 15:40:57.712455 | orchestrator | Tuesday 03 June 2025 15:32:36 +0000 (0:00:00.809) 0:02:55.833 **********
2025-06-03 15:40:57.712474 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.712482 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.712490 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.712498 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.712506 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.712514 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.712521 | orchestrator |
2025-06-03 15:40:57.712577 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-06-03 15:40:57.712586 | orchestrator | Tuesday 03 June 2025 15:32:37 +0000 (0:00:00.672) 0:02:56.506 **********
2025-06-03 15:40:57.712594 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.712602 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.712610 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.712618 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.712626 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.712634 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.712641 | orchestrator |
2025-06-03 15:40:57.712649 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-06-03 15:40:57.712657 | orchestrator | Tuesday 03 June 2025 15:32:38 +0000 (0:00:00.856) 0:02:57.363 **********
2025-06-03 15:40:57.712665 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:40:57.712673 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:40:57.712681 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:40:57.712688 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.712696 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.712704 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.712712 | orchestrator |
2025-06-03 15:40:57.712719 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-06-03 15:40:57.712727 | orchestrator | Tuesday 03 June 2025 15:32:39 +0000 (0:00:01.338) 0:02:58.701 **********
2025-06-03 15:40:57.712735 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:40:57.712744 | orchestrator |
2025-06-03 15:40:57.712752 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-06-03 15:40:57.712760 | orchestrator | Tuesday 03 June 2025 15:32:41 +0000 (0:00:01.456) 0:03:00.157 **********
2025-06-03 15:40:57.712768 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-06-03 15:40:57.712776 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-06-03 15:40:57.712784 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-06-03 15:40:57.712792 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-06-03 15:40:57.712800 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-06-03 15:40:57.712807 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-06-03 15:40:57.712816 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-06-03 15:40:57.712822 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-06-03 15:40:57.712829 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-06-03 15:40:57.712836 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-06-03 15:40:57.712842 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-06-03 15:40:57.712849 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-06-03 15:40:57.712855 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-06-03 15:40:57.712868 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-06-03 15:40:57.712875 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-06-03 15:40:57.712882 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-06-03 15:40:57.712888 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-06-03 15:40:57.712895 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-06-03 15:40:57.712908 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-06-03 15:40:57.712914 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-06-03 15:40:57.712921 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-06-03 15:40:57.712937 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-06-03 15:40:57.712948 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-06-03 15:40:57.712960 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-06-03 15:40:57.712971 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-06-03 15:40:57.712982 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-06-03 15:40:57.712994 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-06-03 15:40:57.713005 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-06-03 15:40:57.713016 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-06-03 15:40:57.713028 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-06-03 15:40:57.713035 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-06-03 15:40:57.713042 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-06-03 15:40:57.713049 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-06-03 15:40:57.713056 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-06-03 15:40:57.713063 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-06-03 15:40:57.713070 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-06-03 15:40:57.713076 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-06-03 15:40:57.713083 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-06-03 15:40:57.713090 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-06-03 15:40:57.713096 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-06-03 15:40:57.713103 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-06-03 15:40:57.713110 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-06-03 15:40:57.713116 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-06-03 15:40:57.713123 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-06-03 15:40:57.713130 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-06-03 15:40:57.713136 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-06-03 15:40:57.713143 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-06-03 15:40:57.713150 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-03 15:40:57.713157 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-03 15:40:57.713163 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-03 15:40:57.713170 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-06-03 15:40:57.713176 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-03 15:40:57.713183 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-03 15:40:57.713189 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-03 15:40:57.713196 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-03 15:40:57.713202 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-03 15:40:57.713209 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-03 15:40:57.713215 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-03 15:40:57.713222 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-03 15:40:57.713229 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-03 15:40:57.713241 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-03 15:40:57.713248 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-03 15:40:57.713255 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-03 15:40:57.713261 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-03 15:40:57.713268 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-03 15:40:57.713275 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-03 15:40:57.713281 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-03 15:40:57.713288 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-03 15:40:57.713294 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-03 15:40:57.713301 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-03 15:40:57.713307 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-03 15:40:57.713314 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-03 15:40:57.713325 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-03 15:40:57.713332 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-03 15:40:57.713339 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-03 15:40:57.713346 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-03 15:40:57.713353 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-03 15:40:57.713359 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-03 15:40:57.713366 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-03 15:40:57.713377 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-03 15:40:57.713384 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-03 15:40:57.713391 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-03 15:40:57.713397 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-06-03 15:40:57.713404 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-06-03 15:40:57.713411 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-06-03 15:40:57.713418 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-03 15:40:57.713424 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-06-03 15:40:57.713431 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-03 15:40:57.713438 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-06-03 15:40:57.713445 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-06-03 15:40:57.713451 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-06-03 15:40:57.713458 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-06-03 15:40:57.713464 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-06-03 15:40:57.713471 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-06-03 15:40:57.713478 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-06-03 15:40:57.713484 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-06-03 15:40:57.713491 | orchestrator |
2025-06-03 15:40:57.713497 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-06-03 15:40:57.713504 | orchestrator | Tuesday 03 June 2025 15:32:47 +0000 (0:00:06.607) 0:03:06.765 **********
2025-06-03 15:40:57.713511 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.713517 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.713544 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.713552 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:40:57.713576 | orchestrator |
2025-06-03 15:40:57.713583 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-06-03 15:40:57.713589 | orchestrator | Tuesday 03 June 2025 15:32:48 +0000 (0:00:00.863) 0:03:07.628 **********
2025-06-03 15:40:57.713596 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-03 15:40:57.713604 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-03 15:40:57.713610 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-03 15:40:57.713617 | orchestrator |
2025-06-03 15:40:57.713624 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-06-03 15:40:57.713631 | orchestrator | Tuesday 03 June 2025 15:32:49 +0000 (0:00:00.688) 0:03:08.317 **********
2025-06-03 15:40:57.713637 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-03 15:40:57.713644 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-03 15:40:57.713651 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-03 15:40:57.713658 | orchestrator |
2025-06-03 15:40:57.713664 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-06-03 15:40:57.713671 | orchestrator | Tuesday 03 June 2025 15:32:50 +0000 (0:00:01.483) 0:03:09.800 **********
2025-06-03 15:40:57.713678 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.713684 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.713691 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.713698 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.713704 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.713711 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.713718 | orchestrator |
2025-06-03 15:40:57.713724 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-06-03 15:40:57.713731 | orchestrator | Tuesday 03 June 2025 15:32:51 +0000 (0:00:00.563) 0:03:10.363 **********
2025-06-03 15:40:57.713738 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.713744 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.713751 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.713757 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.713764 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.713771 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.713777 | orchestrator |
2025-06-03 15:40:57.713784 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-06-03 15:40:57.713795 | orchestrator | Tuesday 03 June 2025 15:32:52 +0000 (0:00:00.712) 0:03:11.076 **********
2025-06-03 15:40:57.713802 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.713809 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.713815 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.713822 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.713829 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.713835 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.713842 | orchestrator |
2025-06-03 15:40:57.713849 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-06-03 15:40:57.713855 | orchestrator | Tuesday 03 June 2025 15:32:52 +0000 (0:00:00.580) 0:03:11.657 **********
2025-06-03 15:40:57.713862 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.713869 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.713880 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.713887 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.713898 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.713905 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.713912 | orchestrator |
2025-06-03 15:40:57.713918 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-06-03 15:40:57.713925 | orchestrator | Tuesday 03 June 2025 15:32:53 +0000 (0:00:00.701) 0:03:12.358 **********
2025-06-03 15:40:57.713933 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.713944 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.713955 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.713966 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.713978 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.713989 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.714001 | orchestrator |
2025-06-03 15:40:57.714012 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-06-03 15:40:57.714135 | orchestrator | Tuesday 03 June 2025 15:32:53 +0000 (0:00:00.596) 0:03:12.955 **********
2025-06-03 15:40:57.714143 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.714150 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.714157 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:40:57.714163 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.714170 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.714176 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.714183 | orchestrator |
2025-06-03 15:40:57.714190 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-06-03 15:40:57.714197 | orchestrator | Tuesday 03 June 2025 15:32:54 +0000 (0:00:00.683) 0:03:13.638 **********
2025-06-03 15:40:57.714204 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:40:57.714210 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:40:57.714217 | orchestrator | skipping:
[testbed-node-2] 2025-06-03 15:40:57.714223 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.714230 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.714236 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.714243 | orchestrator | 2025-06-03 15:40:57.714250 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-06-03 15:40:57.714262 | orchestrator | Tuesday 03 June 2025 15:32:55 +0000 (0:00:00.687) 0:03:14.325 ********** 2025-06-03 15:40:57.714273 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.714283 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.714294 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.714303 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.714312 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.714323 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.714333 | orchestrator | 2025-06-03 15:40:57.714344 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-06-03 15:40:57.714356 | orchestrator | Tuesday 03 June 2025 15:32:56 +0000 (0:00:00.747) 0:03:15.073 ********** 2025-06-03 15:40:57.714367 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.714378 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.714390 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.714401 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.714411 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.714422 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.714429 | orchestrator | 2025-06-03 15:40:57.714436 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-06-03 15:40:57.714444 | orchestrator | Tuesday 03 June 2025 15:33:01 +0000 (0:00:04.998) 0:03:20.071 ********** 2025-06-03 15:40:57.714450 | 
orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.714457 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.714464 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.714470 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.714477 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.714484 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.714500 | orchestrator | 2025-06-03 15:40:57.714507 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-06-03 15:40:57.714514 | orchestrator | Tuesday 03 June 2025 15:33:01 +0000 (0:00:00.805) 0:03:20.877 ********** 2025-06-03 15:40:57.714521 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.714552 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.714561 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.714568 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.714575 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.714582 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.714589 | orchestrator | 2025-06-03 15:40:57.714595 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-06-03 15:40:57.714602 | orchestrator | Tuesday 03 June 2025 15:33:02 +0000 (0:00:00.919) 0:03:21.796 ********** 2025-06-03 15:40:57.714609 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.714616 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.714623 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.714629 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.714636 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.714643 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.714649 | orchestrator | 2025-06-03 15:40:57.714656 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-06-03 15:40:57.714663 | orchestrator | 
Tuesday 03 June 2025 15:33:03 +0000 (0:00:00.745) 0:03:22.542 ********** 2025-06-03 15:40:57.714669 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.714676 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.714683 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.714695 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-03 15:40:57.714702 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-03 15:40:57.714710 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-03 15:40:57.714716 | orchestrator | 2025-06-03 15:40:57.714723 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-06-03 15:40:57.714774 | orchestrator | Tuesday 03 June 2025 15:33:04 +0000 (0:00:00.569) 0:03:23.112 ********** 2025-06-03 15:40:57.714788 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.714799 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.714809 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.714823 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-06-03 15:40:57.714838 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-06-03 
15:40:57.714852 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.714864 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-06-03 15:40:57.714876 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-06-03 15:40:57.714897 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.714904 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-06-03 15:40:57.714911 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-06-03 15:40:57.714918 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.714924 | orchestrator | 2025-06-03 15:40:57.714932 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-06-03 15:40:57.714944 | orchestrator | Tuesday 03 June 2025 15:33:05 +0000 (0:00:00.903) 0:03:24.015 ********** 2025-06-03 15:40:57.714955 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.714966 | orchestrator | 
skipping: [testbed-node-1] 2025-06-03 15:40:57.714977 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.714988 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.714999 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.715010 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.715022 | orchestrator | 2025-06-03 15:40:57.715029 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-06-03 15:40:57.715036 | orchestrator | Tuesday 03 June 2025 15:33:05 +0000 (0:00:00.528) 0:03:24.544 ********** 2025-06-03 15:40:57.715043 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.715049 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.715056 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.715062 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.715069 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.715075 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.715082 | orchestrator | 2025-06-03 15:40:57.715089 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-03 15:40:57.715096 | orchestrator | Tuesday 03 June 2025 15:33:06 +0000 (0:00:00.670) 0:03:25.214 ********** 2025-06-03 15:40:57.715102 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.715109 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.715115 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.715122 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.715129 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.715135 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.715142 | orchestrator | 2025-06-03 15:40:57.715148 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-03 15:40:57.715155 | 
orchestrator | Tuesday 03 June 2025 15:33:06 +0000 (0:00:00.702) 0:03:25.917 ********** 2025-06-03 15:40:57.715166 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.715173 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.715180 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.715186 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.715193 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.715199 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.715206 | orchestrator | 2025-06-03 15:40:57.715213 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-03 15:40:57.715219 | orchestrator | Tuesday 03 June 2025 15:33:07 +0000 (0:00:00.675) 0:03:26.593 ********** 2025-06-03 15:40:57.715226 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.715233 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.715239 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.715274 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.715282 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.715294 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.715301 | orchestrator | 2025-06-03 15:40:57.715308 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-03 15:40:57.715314 | orchestrator | Tuesday 03 June 2025 15:33:08 +0000 (0:00:00.548) 0:03:27.141 ********** 2025-06-03 15:40:57.715321 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.715328 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.715334 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.715341 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.715348 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.715354 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.715361 | orchestrator | 2025-06-03 15:40:57.715367 | 
orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-03 15:40:57.715374 | orchestrator | Tuesday 03 June 2025 15:33:08 +0000 (0:00:00.797) 0:03:27.939 ********** 2025-06-03 15:40:57.715381 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-03 15:40:57.715387 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-03 15:40:57.715394 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-03 15:40:57.715400 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.715407 | orchestrator | 2025-06-03 15:40:57.715413 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-03 15:40:57.715420 | orchestrator | Tuesday 03 June 2025 15:33:09 +0000 (0:00:00.356) 0:03:28.295 ********** 2025-06-03 15:40:57.715427 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-03 15:40:57.715433 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-03 15:40:57.715440 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-03 15:40:57.715447 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.715453 | orchestrator | 2025-06-03 15:40:57.715460 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-03 15:40:57.715466 | orchestrator | Tuesday 03 June 2025 15:33:09 +0000 (0:00:00.342) 0:03:28.637 ********** 2025-06-03 15:40:57.715473 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-03 15:40:57.715479 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-03 15:40:57.715486 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-03 15:40:57.715493 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.715499 | orchestrator | 2025-06-03 15:40:57.715506 | orchestrator | TASK [ceph-facts : Reset rgw_instances 
(workaround)] *************************** 2025-06-03 15:40:57.715512 | orchestrator | Tuesday 03 June 2025 15:33:10 +0000 (0:00:00.456) 0:03:29.094 ********** 2025-06-03 15:40:57.715519 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.715571 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.715578 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.715585 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.715592 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.715598 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.715605 | orchestrator | 2025-06-03 15:40:57.715612 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-03 15:40:57.715618 | orchestrator | Tuesday 03 June 2025 15:33:10 +0000 (0:00:00.753) 0:03:29.847 ********** 2025-06-03 15:40:57.715625 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-06-03 15:40:57.715632 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.715639 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-06-03 15:40:57.715645 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.715652 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-06-03 15:40:57.715659 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.715665 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-03 15:40:57.715672 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-03 15:40:57.715679 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-03 15:40:57.715685 | orchestrator | 2025-06-03 15:40:57.715702 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-06-03 15:40:57.715709 | orchestrator | Tuesday 03 June 2025 15:33:12 +0000 (0:00:01.768) 0:03:31.615 ********** 2025-06-03 15:40:57.715715 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.715722 | orchestrator | changed: [testbed-node-1] 2025-06-03 
15:40:57.715728 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.715735 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:57.715741 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:57.715748 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:57.715754 | orchestrator | 2025-06-03 15:40:57.715761 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-03 15:40:57.715768 | orchestrator | Tuesday 03 June 2025 15:33:14 +0000 (0:00:02.393) 0:03:34.009 ********** 2025-06-03 15:40:57.715775 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.715781 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.715788 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.715794 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:57.715801 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:57.715807 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:57.715813 | orchestrator | 2025-06-03 15:40:57.715819 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-03 15:40:57.715825 | orchestrator | Tuesday 03 June 2025 15:33:15 +0000 (0:00:01.001) 0:03:35.010 ********** 2025-06-03 15:40:57.715832 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.715838 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.715848 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.715855 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:57.715861 | orchestrator | 2025-06-03 15:40:57.715871 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-03 15:40:57.715881 | orchestrator | Tuesday 03 June 2025 15:33:17 +0000 (0:00:01.010) 0:03:36.021 ********** 2025-06-03 15:40:57.715891 | orchestrator | ok: [testbed-node-0] 2025-06-03 
15:40:57.715901 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.715911 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.715921 | orchestrator | 2025-06-03 15:40:57.715932 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-03 15:40:57.715973 | orchestrator | Tuesday 03 June 2025 15:33:17 +0000 (0:00:00.370) 0:03:36.391 ********** 2025-06-03 15:40:57.715985 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.715995 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.716006 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.716016 | orchestrator | 2025-06-03 15:40:57.716026 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-03 15:40:57.716038 | orchestrator | Tuesday 03 June 2025 15:33:18 +0000 (0:00:01.584) 0:03:37.975 ********** 2025-06-03 15:40:57.716044 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-03 15:40:57.716050 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-03 15:40:57.716056 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-03 15:40:57.716063 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.716069 | orchestrator | 2025-06-03 15:40:57.716075 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-03 15:40:57.716081 | orchestrator | Tuesday 03 June 2025 15:33:19 +0000 (0:00:00.794) 0:03:38.769 ********** 2025-06-03 15:40:57.716087 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.716094 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.716100 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.716106 | orchestrator | 2025-06-03 15:40:57.716112 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-03 15:40:57.716118 | orchestrator | Tuesday 03 June 2025 15:33:20 +0000 
(0:00:00.353) 0:03:39.123 ********** 2025-06-03 15:40:57.716124 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.716136 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.716142 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.716149 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.716155 | orchestrator | 2025-06-03 15:40:57.716161 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-03 15:40:57.716167 | orchestrator | Tuesday 03 June 2025 15:33:21 +0000 (0:00:01.093) 0:03:40.217 ********** 2025-06-03 15:40:57.716173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:57.716179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:57.716185 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:57.716192 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.716198 | orchestrator | 2025-06-03 15:40:57.716204 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-03 15:40:57.716210 | orchestrator | Tuesday 03 June 2025 15:33:21 +0000 (0:00:00.457) 0:03:40.674 ********** 2025-06-03 15:40:57.716216 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.716222 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.716229 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.716235 | orchestrator | 2025-06-03 15:40:57.716241 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-03 15:40:57.716247 | orchestrator | Tuesday 03 June 2025 15:33:22 +0000 (0:00:00.392) 0:03:41.067 ********** 2025-06-03 15:40:57.716254 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.716265 | orchestrator | 2025-06-03 15:40:57.716275 | orchestrator | 
RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-03 15:40:57.716285 | orchestrator | Tuesday 03 June 2025 15:33:22 +0000 (0:00:00.253) 0:03:41.320 ********** 2025-06-03 15:40:57.716294 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.716304 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.716314 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.716324 | orchestrator | 2025-06-03 15:40:57.716334 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-03 15:40:57.716345 | orchestrator | Tuesday 03 June 2025 15:33:22 +0000 (0:00:00.356) 0:03:41.677 ********** 2025-06-03 15:40:57.716355 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.716365 | orchestrator | 2025-06-03 15:40:57.716372 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-03 15:40:57.716378 | orchestrator | Tuesday 03 June 2025 15:33:22 +0000 (0:00:00.234) 0:03:41.911 ********** 2025-06-03 15:40:57.716384 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.716390 | orchestrator | 2025-06-03 15:40:57.716396 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-03 15:40:57.716402 | orchestrator | Tuesday 03 June 2025 15:33:23 +0000 (0:00:00.269) 0:03:42.181 ********** 2025-06-03 15:40:57.716408 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.716414 | orchestrator | 2025-06-03 15:40:57.716420 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-03 15:40:57.716426 | orchestrator | Tuesday 03 June 2025 15:33:23 +0000 (0:00:00.433) 0:03:42.615 ********** 2025-06-03 15:40:57.716432 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.716439 | orchestrator | 2025-06-03 15:40:57.716445 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] 
***************** 2025-06-03 15:40:57.716451 | orchestrator | Tuesday 03 June 2025 15:33:23 +0000 (0:00:00.230) 0:03:42.845 ********** 2025-06-03 15:40:57.716457 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.716463 | orchestrator | 2025-06-03 15:40:57.716469 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-03 15:40:57.716475 | orchestrator | Tuesday 03 June 2025 15:33:24 +0000 (0:00:00.247) 0:03:43.093 ********** 2025-06-03 15:40:57.716486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:57.716493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:57.716505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:57.716511 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.716518 | orchestrator | 2025-06-03 15:40:57.716541 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-03 15:40:57.716548 | orchestrator | Tuesday 03 June 2025 15:33:24 +0000 (0:00:00.404) 0:03:43.498 ********** 2025-06-03 15:40:57.716554 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.716560 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.716566 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.716572 | orchestrator | 2025-06-03 15:40:57.716604 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-03 15:40:57.716611 | orchestrator | Tuesday 03 June 2025 15:33:24 +0000 (0:00:00.354) 0:03:43.852 ********** 2025-06-03 15:40:57.716617 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.716623 | orchestrator | 2025-06-03 15:40:57.716629 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-03 15:40:57.716636 | orchestrator | Tuesday 03 June 2025 15:33:25 +0000 (0:00:00.217) 0:03:44.069 ********** 
2025-06-03 15:40:57.716642 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.716648 | orchestrator | 2025-06-03 15:40:57.716654 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-03 15:40:57.716660 | orchestrator | Tuesday 03 June 2025 15:33:25 +0000 (0:00:00.240) 0:03:44.310 ********** 2025-06-03 15:40:57.716666 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.716673 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.716679 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.716685 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.716691 | orchestrator | 2025-06-03 15:40:57.716698 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-03 15:40:57.716704 | orchestrator | Tuesday 03 June 2025 15:33:26 +0000 (0:00:01.096) 0:03:45.407 ********** 2025-06-03 15:40:57.716710 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.716716 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.716722 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.716728 | orchestrator | 2025-06-03 15:40:57.716735 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-03 15:40:57.716741 | orchestrator | Tuesday 03 June 2025 15:33:26 +0000 (0:00:00.337) 0:03:45.744 ********** 2025-06-03 15:40:57.716747 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:57.716753 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:57.716759 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:57.716766 | orchestrator | 2025-06-03 15:40:57.716772 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-03 15:40:57.716778 | orchestrator | Tuesday 03 June 2025 15:33:27 +0000 (0:00:01.129) 0:03:46.874 ********** 2025-06-03 
15:40:57.716784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:57.716790 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:57.716797 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:57.716803 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.716809 | orchestrator | 2025-06-03 15:40:57.716815 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-03 15:40:57.716821 | orchestrator | Tuesday 03 June 2025 15:33:28 +0000 (0:00:00.857) 0:03:47.731 ********** 2025-06-03 15:40:57.716827 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.716834 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.716840 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.716846 | orchestrator | 2025-06-03 15:40:57.716852 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-03 15:40:57.716858 | orchestrator | Tuesday 03 June 2025 15:33:29 +0000 (0:00:00.304) 0:03:48.036 ********** 2025-06-03 15:40:57.716864 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.716876 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.716882 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.716888 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.716895 | orchestrator | 2025-06-03 15:40:57.716901 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-03 15:40:57.716907 | orchestrator | Tuesday 03 June 2025 15:33:29 +0000 (0:00:00.891) 0:03:48.927 ********** 2025-06-03 15:40:57.716913 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.716919 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.716925 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.716933 | 
orchestrator | 2025-06-03 15:40:57.716943 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-03 15:40:57.716954 | orchestrator | Tuesday 03 June 2025 15:33:30 +0000 (0:00:00.290) 0:03:49.218 ********** 2025-06-03 15:40:57.716963 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:57.716974 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:57.716985 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:57.716995 | orchestrator | 2025-06-03 15:40:57.717005 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-03 15:40:57.717014 | orchestrator | Tuesday 03 June 2025 15:33:31 +0000 (0:00:01.218) 0:03:50.437 ********** 2025-06-03 15:40:57.717023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:57.717030 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:57.717036 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:57.717042 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.717048 | orchestrator | 2025-06-03 15:40:57.717055 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-03 15:40:57.717061 | orchestrator | Tuesday 03 June 2025 15:33:32 +0000 (0:00:00.680) 0:03:51.118 ********** 2025-06-03 15:40:57.717067 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.717073 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.717079 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.717086 | orchestrator | 2025-06-03 15:40:57.717096 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-06-03 15:40:57.717103 | orchestrator | Tuesday 03 June 2025 15:33:32 +0000 (0:00:00.274) 0:03:51.393 ********** 2025-06-03 15:40:57.717109 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.717115 | orchestrator | 
skipping: [testbed-node-1] 2025-06-03 15:40:57.717121 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.717127 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.717134 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.717140 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.717146 | orchestrator | 2025-06-03 15:40:57.717152 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-03 15:40:57.717158 | orchestrator | Tuesday 03 June 2025 15:33:33 +0000 (0:00:00.674) 0:03:52.068 ********** 2025-06-03 15:40:57.717188 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.717195 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.717201 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.717207 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:57.717213 | orchestrator | 2025-06-03 15:40:57.717219 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-03 15:40:57.717226 | orchestrator | Tuesday 03 June 2025 15:33:33 +0000 (0:00:00.812) 0:03:52.880 ********** 2025-06-03 15:40:57.717232 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.717238 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.717244 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.717250 | orchestrator | 2025-06-03 15:40:57.717256 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-03 15:40:57.717262 | orchestrator | Tuesday 03 June 2025 15:33:34 +0000 (0:00:00.279) 0:03:53.159 ********** 2025-06-03 15:40:57.717275 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.717281 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.717287 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.717293 | orchestrator | 2025-06-03 
15:40:57.717299 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-03 15:40:57.717305 | orchestrator | Tuesday 03 June 2025 15:33:35 +0000 (0:00:01.151) 0:03:54.311 ********** 2025-06-03 15:40:57.717311 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-03 15:40:57.717317 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-03 15:40:57.717323 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-03 15:40:57.717330 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.717336 | orchestrator | 2025-06-03 15:40:57.717342 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-03 15:40:57.717348 | orchestrator | Tuesday 03 June 2025 15:33:36 +0000 (0:00:00.751) 0:03:55.062 ********** 2025-06-03 15:40:57.717354 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.717360 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.717366 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.717372 | orchestrator | 2025-06-03 15:40:57.717379 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-06-03 15:40:57.717385 | orchestrator | 2025-06-03 15:40:57.717391 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-03 15:40:57.717397 | orchestrator | Tuesday 03 June 2025 15:33:36 +0000 (0:00:00.640) 0:03:55.703 ********** 2025-06-03 15:40:57.717403 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:57.717410 | orchestrator | 2025-06-03 15:40:57.717416 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-03 15:40:57.717422 | orchestrator | Tuesday 03 June 2025 15:33:37 +0000 (0:00:00.361) 0:03:56.065 ********** 2025-06-03 
15:40:57.717428 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:57.717434 | orchestrator | 2025-06-03 15:40:57.717440 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-03 15:40:57.717446 | orchestrator | Tuesday 03 June 2025 15:33:37 +0000 (0:00:00.538) 0:03:56.603 ********** 2025-06-03 15:40:57.717453 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.717459 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.717465 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.717471 | orchestrator | 2025-06-03 15:40:57.717477 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-03 15:40:57.717484 | orchestrator | Tuesday 03 June 2025 15:33:38 +0000 (0:00:00.615) 0:03:57.219 ********** 2025-06-03 15:40:57.717490 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.717496 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.717502 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.717508 | orchestrator | 2025-06-03 15:40:57.717514 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-03 15:40:57.717520 | orchestrator | Tuesday 03 June 2025 15:33:38 +0000 (0:00:00.250) 0:03:57.469 ********** 2025-06-03 15:40:57.717547 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.717553 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.717559 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.717566 | orchestrator | 2025-06-03 15:40:57.717572 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-03 15:40:57.717578 | orchestrator | Tuesday 03 June 2025 15:33:38 +0000 (0:00:00.254) 0:03:57.723 ********** 2025-06-03 15:40:57.717584 | orchestrator | skipping: [testbed-node-0] 
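The "Check for a … container" tasks above are simple runtime probes: each daemon type is looked up by container name, and hosts that do not carry that role report skipping. A minimal sketch of such a probe, assuming podman as the container runtime and ceph-ansible's `ceph-mon-<hostname>` naming scheme (the registered variable name is also an assumption, not the literal role code):

```yaml
# Illustrative sketch only -- not the literal ceph-handler task.
# Assumes podman and a container named ceph-mon-<hostname>.
- name: Check for a mon container
  ansible.builtin.command: "podman ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}"
  register: ceph_mon_container_stat
  changed_when: false   # a probe never changes state
  failed_when: false    # absence of the container is not an error
  check_mode: false
```

A non-empty stdout from a probe like this is what later feeds the `Set_fact handler_*_status` tasks, which is why only the hosts actually running each daemon come back `ok`.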
2025-06-03 15:40:57.717590 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.717597 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.717603 | orchestrator | 2025-06-03 15:40:57.717609 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-03 15:40:57.717620 | orchestrator | Tuesday 03 June 2025 15:33:39 +0000 (0:00:00.435) 0:03:58.158 ********** 2025-06-03 15:40:57.717626 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.717632 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.717639 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.717645 | orchestrator | 2025-06-03 15:40:57.717651 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-03 15:40:57.717661 | orchestrator | Tuesday 03 June 2025 15:33:39 +0000 (0:00:00.697) 0:03:58.856 ********** 2025-06-03 15:40:57.717667 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.717674 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.717680 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.717686 | orchestrator | 2025-06-03 15:40:57.717692 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-03 15:40:57.717698 | orchestrator | Tuesday 03 June 2025 15:33:40 +0000 (0:00:00.297) 0:03:59.153 ********** 2025-06-03 15:40:57.717705 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.717711 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.717717 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.717723 | orchestrator | 2025-06-03 15:40:57.717730 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-03 15:40:57.717756 | orchestrator | Tuesday 03 June 2025 15:33:40 +0000 (0:00:00.257) 0:03:59.411 ********** 2025-06-03 15:40:57.717763 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.717769 
| orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.717776 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.717782 | orchestrator | 2025-06-03 15:40:57.717788 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-03 15:40:57.717794 | orchestrator | Tuesday 03 June 2025 15:33:41 +0000 (0:00:00.879) 0:04:00.291 ********** 2025-06-03 15:40:57.717800 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.717806 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.717812 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.717818 | orchestrator | 2025-06-03 15:40:57.717825 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-03 15:40:57.717831 | orchestrator | Tuesday 03 June 2025 15:33:42 +0000 (0:00:00.728) 0:04:01.020 ********** 2025-06-03 15:40:57.717837 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.717843 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.717850 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.717856 | orchestrator | 2025-06-03 15:40:57.717862 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-03 15:40:57.717868 | orchestrator | Tuesday 03 June 2025 15:33:42 +0000 (0:00:00.277) 0:04:01.297 ********** 2025-06-03 15:40:57.717874 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.717880 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.717887 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.717893 | orchestrator | 2025-06-03 15:40:57.717899 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-03 15:40:57.717905 | orchestrator | Tuesday 03 June 2025 15:33:42 +0000 (0:00:00.310) 0:04:01.608 ********** 2025-06-03 15:40:57.717911 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.717917 | orchestrator | skipping: [testbed-node-1] 
2025-06-03 15:40:57.717923 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.717930 | orchestrator | 2025-06-03 15:40:57.717941 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-03 15:40:57.717952 | orchestrator | Tuesday 03 June 2025 15:33:43 +0000 (0:00:00.452) 0:04:02.060 ********** 2025-06-03 15:40:57.717962 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.717972 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.717982 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.717992 | orchestrator | 2025-06-03 15:40:57.718003 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-03 15:40:57.718013 | orchestrator | Tuesday 03 June 2025 15:33:43 +0000 (0:00:00.261) 0:04:02.321 ********** 2025-06-03 15:40:57.718061 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.718074 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.718085 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.718096 | orchestrator | 2025-06-03 15:40:57.718106 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-03 15:40:57.718117 | orchestrator | Tuesday 03 June 2025 15:33:43 +0000 (0:00:00.268) 0:04:02.590 ********** 2025-06-03 15:40:57.718127 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.718137 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.718148 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.718159 | orchestrator | 2025-06-03 15:40:57.718170 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-03 15:40:57.718181 | orchestrator | Tuesday 03 June 2025 15:33:43 +0000 (0:00:00.253) 0:04:02.843 ********** 2025-06-03 15:40:57.718190 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.718201 | orchestrator | skipping: [testbed-node-1] 
2025-06-03 15:40:57.718213 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.718223 | orchestrator | 2025-06-03 15:40:57.718234 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-03 15:40:57.718243 | orchestrator | Tuesday 03 June 2025 15:33:44 +0000 (0:00:00.428) 0:04:03.271 ********** 2025-06-03 15:40:57.718255 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.718265 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.718275 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.718285 | orchestrator | 2025-06-03 15:40:57.718296 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-03 15:40:57.718306 | orchestrator | Tuesday 03 June 2025 15:33:44 +0000 (0:00:00.275) 0:04:03.547 ********** 2025-06-03 15:40:57.718315 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.718326 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.718337 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.718349 | orchestrator | 2025-06-03 15:40:57.718360 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-03 15:40:57.718370 | orchestrator | Tuesday 03 June 2025 15:33:44 +0000 (0:00:00.356) 0:04:03.903 ********** 2025-06-03 15:40:57.718379 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.718389 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.718398 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.718408 | orchestrator | 2025-06-03 15:40:57.718419 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-06-03 15:40:57.718430 | orchestrator | Tuesday 03 June 2025 15:33:45 +0000 (0:00:00.643) 0:04:04.547 ********** 2025-06-03 15:40:57.718440 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.718450 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.718460 | orchestrator | ok: [testbed-node-2] 
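The `Set_fact container_exec_cmd` task that closes this block builds the command prefix used for all subsequent `ceph` CLI calls, so the play can talk to the cluster through a monitor container rather than a host-installed client. A hedged sketch, assuming podman and ceph-ansible's container naming (the exact expression in the role may differ):

```yaml
# Sketch only: command prefix for running ceph CLI calls inside
# the local monitor container; runtime and name are assumptions.
- name: Set_fact container_exec_cmd
  ansible.builtin.set_fact:
    container_exec_cmd: "podman exec ceph-mon-{{ ansible_facts['hostname'] }}"
```

Later tasks such as "Waiting for the monitor(s) to form the quorum..." can then run `{{ container_exec_cmd }} ceph quorum_status` style commands against the freshly started monitors.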
2025-06-03 15:40:57.718471 | orchestrator | 2025-06-03 15:40:57.718481 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-06-03 15:40:57.718491 | orchestrator | Tuesday 03 June 2025 15:33:45 +0000 (0:00:00.274) 0:04:04.821 ********** 2025-06-03 15:40:57.718511 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:57.718544 | orchestrator | 2025-06-03 15:40:57.718556 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-06-03 15:40:57.718566 | orchestrator | Tuesday 03 June 2025 15:33:46 +0000 (0:00:00.514) 0:04:05.336 ********** 2025-06-03 15:40:57.718576 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.718587 | orchestrator | 2025-06-03 15:40:57.718597 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-06-03 15:40:57.718607 | orchestrator | Tuesday 03 June 2025 15:33:46 +0000 (0:00:00.151) 0:04:05.487 ********** 2025-06-03 15:40:57.718617 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-03 15:40:57.718627 | orchestrator | 2025-06-03 15:40:57.718692 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-06-03 15:40:57.718706 | orchestrator | Tuesday 03 June 2025 15:33:47 +0000 (0:00:01.259) 0:04:06.747 ********** 2025-06-03 15:40:57.718727 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.718738 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.718748 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.718758 | orchestrator | 2025-06-03 15:40:57.718768 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-06-03 15:40:57.718778 | orchestrator | Tuesday 03 June 2025 15:33:48 +0000 (0:00:00.309) 0:04:07.057 ********** 2025-06-03 15:40:57.718788 | orchestrator | ok: [testbed-node-0] 
2025-06-03 15:40:57.718797 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.718806 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.718816 | orchestrator | 2025-06-03 15:40:57.718826 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-06-03 15:40:57.718837 | orchestrator | Tuesday 03 June 2025 15:33:48 +0000 (0:00:00.365) 0:04:07.422 ********** 2025-06-03 15:40:57.718846 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.718855 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.718865 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.718876 | orchestrator | 2025-06-03 15:40:57.718886 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-06-03 15:40:57.718896 | orchestrator | Tuesday 03 June 2025 15:33:49 +0000 (0:00:01.298) 0:04:08.721 ********** 2025-06-03 15:40:57.718906 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.718916 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.718926 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.718935 | orchestrator | 2025-06-03 15:40:57.718945 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-06-03 15:40:57.718954 | orchestrator | Tuesday 03 June 2025 15:33:50 +0000 (0:00:01.081) 0:04:09.802 ********** 2025-06-03 15:40:57.718964 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.718975 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.718984 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.718995 | orchestrator | 2025-06-03 15:40:57.719005 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-06-03 15:40:57.719015 | orchestrator | Tuesday 03 June 2025 15:33:51 +0000 (0:00:00.757) 0:04:10.559 ********** 2025-06-03 15:40:57.719025 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.719036 | 
orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.719046 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.719056 | orchestrator | 2025-06-03 15:40:57.719066 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-06-03 15:40:57.719076 | orchestrator | Tuesday 03 June 2025 15:33:52 +0000 (0:00:00.762) 0:04:11.322 ********** 2025-06-03 15:40:57.719085 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.719095 | orchestrator | 2025-06-03 15:40:57.719106 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-06-03 15:40:57.719116 | orchestrator | Tuesday 03 June 2025 15:33:53 +0000 (0:00:01.363) 0:04:12.686 ********** 2025-06-03 15:40:57.719127 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.719138 | orchestrator | 2025-06-03 15:40:57.719149 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-06-03 15:40:57.719160 | orchestrator | Tuesday 03 June 2025 15:33:54 +0000 (0:00:00.654) 0:04:13.341 ********** 2025-06-03 15:40:57.719171 | orchestrator | changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:40:57.719182 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-03 15:40:57.719193 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:40:57.719204 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-03 15:40:57.719215 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-06-03 15:40:57.719226 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-03 15:40:57.719237 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-03 15:40:57.719247 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-06-03 15:40:57.719267 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-2(192.168.16.12)] => (item=None) 2025-06-03 15:40:57.719278 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2025-06-03 15:40:57.719287 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-06-03 15:40:57.719297 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-06-03 15:40:57.719306 | orchestrator | 2025-06-03 15:40:57.719316 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-06-03 15:40:57.719325 | orchestrator | Tuesday 03 June 2025 15:33:57 +0000 (0:00:03.500) 0:04:16.841 ********** 2025-06-03 15:40:57.719335 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.719344 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.719353 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.719362 | orchestrator | 2025-06-03 15:40:57.719372 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-06-03 15:40:57.719381 | orchestrator | Tuesday 03 June 2025 15:33:59 +0000 (0:00:01.475) 0:04:18.317 ********** 2025-06-03 15:40:57.719391 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.719400 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.719410 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.719419 | orchestrator | 2025-06-03 15:40:57.719428 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-06-03 15:40:57.719445 | orchestrator | Tuesday 03 June 2025 15:33:59 +0000 (0:00:00.366) 0:04:18.684 ********** 2025-06-03 15:40:57.719455 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.719466 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.719476 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.719485 | orchestrator | 2025-06-03 15:40:57.719495 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-06-03 15:40:57.719504 | orchestrator | Tuesday 03 June 2025 15:34:00 +0000 
(0:00:00.396) 0:04:19.080 ********** 2025-06-03 15:40:57.719513 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.719582 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.719598 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.719608 | orchestrator | 2025-06-03 15:40:57.719618 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-06-03 15:40:57.719680 | orchestrator | Tuesday 03 June 2025 15:34:01 +0000 (0:00:01.810) 0:04:20.890 ********** 2025-06-03 15:40:57.719695 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.719706 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.719718 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.719729 | orchestrator | 2025-06-03 15:40:57.719741 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-06-03 15:40:57.719752 | orchestrator | Tuesday 03 June 2025 15:34:04 +0000 (0:00:02.239) 0:04:23.129 ********** 2025-06-03 15:40:57.719764 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.719775 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.719786 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.719798 | orchestrator | 2025-06-03 15:40:57.719809 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-06-03 15:40:57.719820 | orchestrator | Tuesday 03 June 2025 15:34:04 +0000 (0:00:00.425) 0:04:23.554 ********** 2025-06-03 15:40:57.719831 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:57.719843 | orchestrator | 2025-06-03 15:40:57.719854 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-06-03 15:40:57.719866 | orchestrator | Tuesday 03 June 2025 15:34:05 +0000 (0:00:00.710) 0:04:24.265 ********** 2025-06-03 15:40:57.719877 | 
orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.719889 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.719900 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.719911 | orchestrator | 2025-06-03 15:40:57.719922 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-06-03 15:40:57.719933 | orchestrator | Tuesday 03 June 2025 15:34:05 +0000 (0:00:00.745) 0:04:25.011 ********** 2025-06-03 15:40:57.719960 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.719971 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.719981 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.719991 | orchestrator | 2025-06-03 15:40:57.720000 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-06-03 15:40:57.720009 | orchestrator | Tuesday 03 June 2025 15:34:06 +0000 (0:00:00.465) 0:04:25.476 ********** 2025-06-03 15:40:57.720019 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:57.720030 | orchestrator | 2025-06-03 15:40:57.720040 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-06-03 15:40:57.720050 | orchestrator | Tuesday 03 June 2025 15:34:07 +0000 (0:00:00.612) 0:04:26.089 ********** 2025-06-03 15:40:57.720060 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.720070 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.720080 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.720090 | orchestrator | 2025-06-03 15:40:57.720099 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-06-03 15:40:57.720110 | orchestrator | Tuesday 03 June 2025 15:34:09 +0000 (0:00:02.297) 0:04:28.387 ********** 2025-06-03 15:40:57.720120 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.720130 | 
orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.720140 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.720150 | orchestrator | 2025-06-03 15:40:57.720160 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-06-03 15:40:57.720170 | orchestrator | Tuesday 03 June 2025 15:34:10 +0000 (0:00:01.281) 0:04:29.668 ********** 2025-06-03 15:40:57.720180 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.720190 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.720200 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.720210 | orchestrator | 2025-06-03 15:40:57.720220 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-06-03 15:40:57.720230 | orchestrator | Tuesday 03 June 2025 15:34:12 +0000 (0:00:01.923) 0:04:31.591 ********** 2025-06-03 15:40:57.720239 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.720249 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.720259 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.720269 | orchestrator | 2025-06-03 15:40:57.720279 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-06-03 15:40:57.720289 | orchestrator | Tuesday 03 June 2025 15:34:14 +0000 (0:00:02.091) 0:04:33.683 ********** 2025-06-03 15:40:57.720299 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:40:57.720308 | orchestrator | 2025-06-03 15:40:57.720318 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
*************
Tuesday 03 June 2025 15:34:15 +0000 (0:00:00.756) 0:04:34.439 **********
ok: [testbed-node-0]

TASK [ceph-mon : Fetch ceph initial keys] **************************************
Tuesday 03 June 2025 15:34:16 +0000 (0:00:01.259) 0:04:35.699 **********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [ceph-mon : Include secure_cluster.yml] ***********************************
Tuesday 03 June 2025 15:34:25 +0000 (0:00:09.053) 0:04:44.752 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Set cluster configs] ******************************************
Tuesday 03 June 2025 15:34:26 +0000 (0:00:00.329) 0:04:45.082 **********
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7d975e426233d6e8b9a3d09f27ca9ff76a28d6be'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7d975e426233d6e8b9a3d09f27ca9ff76a28d6be'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7d975e426233d6e8b9a3d09f27ca9ff76a28d6be'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7d975e426233d6e8b9a3d09f27ca9ff76a28d6be'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7d975e426233d6e8b9a3d09f27ca9ff76a28d6be'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7d975e426233d6e8b9a3d09f27ca9ff76a28d6be'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__7d975e426233d6e8b9a3d09f27ca9ff76a28d6be'}])

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Tuesday 03 June 2025 15:34:41 +0000 (0:00:15.724) 0:05:00.806 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Tuesday 03 June 2025 15:34:42 +0000 (0:00:00.351) 0:05:01.158 **********
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Tuesday 03 June 2025 15:34:43 +0000 (0:00:00.939) 0:05:02.097 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Tuesday 03 June 2025 15:34:43 +0000 (0:00:00.425) 0:05:02.522 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Tuesday 03 June 2025 15:34:43 +0000 (0:00:00.370) 0:05:02.893 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Tuesday 03 June 2025 15:34:44 +0000 (0:00:01.100) 0:05:03.993 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mgr] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Tuesday 03 June 2025 15:34:45 +0000 (0:00:00.962) 0:05:04.956 **********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Tuesday 03 June 2025 15:34:46 +0000 (0:00:00.545) 0:05:05.501 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Tuesday 03 June 2025 15:34:47 +0000 (0:00:00.777) 0:05:06.278 **********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Tuesday 03 June 2025 15:34:48 +0000 (0:00:00.780) 0:05:07.059 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Tuesday 03 June 2025 15:34:48 +0000 (0:00:00.362) 0:05:07.422 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Tuesday 03 June 2025 15:34:48 +0000 (0:00:00.540) 0:05:07.962 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Tuesday 03 June 2025 15:34:49 +0000 (0:00:00.331) 0:05:08.294 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Tuesday 03 June 2025 15:34:49 +0000 (0:00:00.704) 0:05:08.998 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Tuesday 03 June 2025 15:34:50 +0000 (0:00:00.326) 0:05:09.325 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Tuesday 03 June 2025 15:34:50 +0000 (0:00:00.591) 0:05:09.917 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Tuesday 03 June 2025 15:34:51 +0000 (0:00:00.707) 0:05:10.624 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Tuesday 03 June 2025 15:34:52 +0000 (0:00:00.742) 0:05:11.367 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Tuesday 03 June 2025 15:34:52 +0000 (0:00:00.293) 0:05:11.661 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Tuesday 03 June 2025 15:34:53 +0000 (0:00:00.610) 0:05:12.271 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Tuesday 03 June 2025 15:34:53 +0000 (0:00:00.317) 0:05:12.589 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Tuesday 03 June 2025 15:34:53 +0000 (0:00:00.385) 0:05:12.974 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Tuesday 03 June 2025 15:34:54 +0000 (0:00:00.317) 0:05:13.291 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Tuesday 03 June 2025 15:34:54 +0000 (0:00:00.566) 0:05:13.858 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Tuesday 03 June 2025 15:34:55 +0000 (0:00:00.347) 0:05:14.206 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Tuesday 03 June 2025 15:34:55 +0000 (0:00:00.368) 0:05:14.574 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Tuesday 03 June 2025 15:34:55 +0000 (0:00:00.313) 0:05:14.888 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
Tuesday 03 June 2025 15:34:56 +0000 (0:00:00.780) 0:05:15.668 **********
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-mgr : Include common.yml] *******************************************
Tuesday 03 June 2025 15:34:57 +0000 (0:00:00.613) 0:05:16.282 **********
included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Create mgr directory] *****************************************
Tuesday 03 June 2025 15:34:57 +0000 (0:00:00.519) 0:05:16.802 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
Tuesday 03 June 2025 15:34:58 +0000 (0:00:01.061) 0:05:17.863 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
Tuesday 03 June 2025 15:34:59 +0000 (0:00:00.397) 0:05:18.261 **********
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]

TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
Tuesday 03 June 2025 15:35:11 +0000 (0:00:11.901) 0:05:30.163 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Get keys from monitors] ***************************************
Tuesday 03 June 2025 15:35:11 +0000 (0:00:00.361) 0:05:30.524 **********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)

TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
Tuesday 03 June 2025 15:35:14 +0000 (0:00:02.549) 0:05:33.074 **********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-2] => (item=None)
changed: [testbed-node-1] => (item=None)

TASK [ceph-mgr : Set mgr key permissions] **************************************
Tuesday 03 June 2025 15:35:15 +0000 (0:00:01.452) 0:05:34.527 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
Tuesday 03 June 2025 15:35:16 +0000 (0:00:00.671) 0:05:35.198 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include pre_requisite.yml] ************************************
Tuesday 03 June 2025 15:35:16 +0000 (0:00:00.300) 0:05:35.499 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include start_mgr.yml] ****************************************
Tuesday 03 June 2025 15:35:16 +0000 (0:00:00.292) 0:05:35.791 **********
included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Ensure systemd service override directory exists] *************
Tuesday 03 June 2025 15:35:17 +0000 (0:00:00.618) 0:05:36.409 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
Tuesday 03 June 2025 15:35:17 +0000 (0:00:00.280) 0:05:36.690 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
Tuesday 03 June 2025 15:35:17 +0000 (0:00:00.297) 0:05:36.987 **********
included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Generate systemd unit file] ***********************************
Tuesday 03 June 2025 15:35:18 +0000 (0:00:00.769) 0:05:37.757 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
Tuesday 03 June 2025 15:35:20 +0000 (0:00:01.276) 0:05:39.034 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
Tuesday 03 June 2025 15:35:21 +0000 (0:00:01.138) 0:05:40.173 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Systemd start mgr] ********************************************
Tuesday 03 June 2025 15:35:23 +0000 (0:00:02.197) 0:05:42.370 **********
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]

TASK [ceph-mgr : Include mgr_modules.yml] **************************************
Tuesday 03 June 2025 15:35:26 +0000 (0:00:02.768) 0:05:45.138 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2

TASK [ceph-mgr : Wait for all mgr to be up] ************************************
Tuesday 03 June 2025 15:35:26 +0000 (0:00:00.440) 0:05:45.579 **********
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
Tuesday 03 June 2025 15:36:02 +0000 (0:00:36.297) 0:06:21.876 **********
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
Tuesday 03 June 2025 15:36:04 +0000 (0:00:01.784) 0:06:23.661 **********
ok: [testbed-node-2]

TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
Tuesday 03 June 2025 15:36:05 +0000 (0:00:00.542) 0:06:24.204 **********
ok: [testbed-node-2]

TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
Tuesday 03 June 2025 15:36:05 +0000 (0:00:00.169) 0:06:24.373 **********
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)

TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
Tuesday 03 June 2025 15:36:12 +0000 (0:00:06.731) 0:06:31.104 **********
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Tuesday 03 June 2025 15:36:17 +0000 (0:00:04.995) 0:06:36.099 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Tuesday 03 June 2025 15:36:17 +0000 (0:00:00.791) 0:06:36.891 **********
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Tuesday 03 June 2025 15:36:18 +0000 (0:00:00.455) 0:06:37.347 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Tuesday 03 June 2025 15:36:18 +0000 (0:00:00.297) 0:06:37.644 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Tuesday 03 June 2025 15:36:20 +0000 (0:00:01.452) 0:06:39.096 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Tuesday 03 June 2025 15:36:20 +0000 (0:00:00.671) 0:06:39.768 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Tuesday 03 June 2025 15:36:21 +0000 (0:00:00.579) 0:06:40.347 **********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Tuesday 03 June 2025 15:36:22 +0000 (0:00:00.751) 0:06:41.098 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Tuesday 03 June 2025 15:36:22 +0000 (0:00:00.574) 0:06:41.673 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Tuesday 03 June 2025 15:36:22 +0000 (0:00:00.291) 0:06:41.965 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Tuesday 03 June 2025 15:36:23 +0000 (0:00:01.010) 0:06:42.975 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Tuesday 03 June 2025 15:36:24 +0000 (0:00:00.710) 0:06:43.686 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Tuesday 03 June 2025 15:36:25 +0000 (0:00:00.660) 0:06:44.346 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Tuesday 03 June 2025 15:36:25 +0000 (0:00:00.329) 0:06:44.676 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Tuesday 03 June 2025 15:36:26 +0000 (0:00:00.562) 0:06:45.239 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Tuesday 03 June 2025 15:36:26 +0000 (0:00:00.385) 0:06:45.624 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Tuesday 03 June 2025 15:36:27 +0000 (0:00:00.688) 0:06:46.313 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Tuesday 03 June 2025 15:36:28 +0000 (0:00:00.726) 0:06:47.039 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Tuesday 03 June 2025 15:36:28 +0000 (0:00:00.575) 0:06:47.614 **********
skipping:
[testbed-node-3] 2025-06-03 15:40:57.723779 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.723786 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.723794 | orchestrator | 2025-06-03 15:40:57.723801 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-03 15:40:57.723809 | orchestrator | Tuesday 03 June 2025 15:36:28 +0000 (0:00:00.346) 0:06:47.961 ********** 2025-06-03 15:40:57.723821 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.723829 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.723836 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.723843 | orchestrator | 2025-06-03 15:40:57.723850 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-03 15:40:57.723857 | orchestrator | Tuesday 03 June 2025 15:36:29 +0000 (0:00:00.340) 0:06:48.301 ********** 2025-06-03 15:40:57.723869 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.723876 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.723883 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.723889 | orchestrator | 2025-06-03 15:40:57.723895 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-03 15:40:57.723902 | orchestrator | Tuesday 03 June 2025 15:36:29 +0000 (0:00:00.375) 0:06:48.677 ********** 2025-06-03 15:40:57.723912 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.723926 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.723936 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.723943 | orchestrator | 2025-06-03 15:40:57.723950 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-03 15:40:57.723958 | orchestrator | Tuesday 03 June 2025 15:36:30 +0000 (0:00:00.671) 0:06:49.348 ********** 2025-06-03 15:40:57.723974 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.723982 | 
orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.723989 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.723999 | orchestrator | 2025-06-03 15:40:57.724012 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-03 15:40:57.724019 | orchestrator | Tuesday 03 June 2025 15:36:30 +0000 (0:00:00.354) 0:06:49.703 ********** 2025-06-03 15:40:57.724027 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.724034 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.724041 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.724049 | orchestrator | 2025-06-03 15:40:57.724056 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-03 15:40:57.724063 | orchestrator | Tuesday 03 June 2025 15:36:30 +0000 (0:00:00.302) 0:06:50.006 ********** 2025-06-03 15:40:57.724070 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.724077 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.724085 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.724092 | orchestrator | 2025-06-03 15:40:57.724100 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-03 15:40:57.724108 | orchestrator | Tuesday 03 June 2025 15:36:31 +0000 (0:00:00.344) 0:06:50.350 ********** 2025-06-03 15:40:57.724115 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.724123 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.724130 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.724138 | orchestrator | 2025-06-03 15:40:57.724145 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-03 15:40:57.724152 | orchestrator | Tuesday 03 June 2025 15:36:32 +0000 (0:00:00.696) 0:06:51.046 ********** 2025-06-03 15:40:57.724160 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.724167 | orchestrator | ok: 
[testbed-node-4] 2025-06-03 15:40:57.724174 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.724181 | orchestrator | 2025-06-03 15:40:57.724188 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-06-03 15:40:57.724196 | orchestrator | Tuesday 03 June 2025 15:36:32 +0000 (0:00:00.544) 0:06:51.590 ********** 2025-06-03 15:40:57.724204 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.724212 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.724219 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.724226 | orchestrator | 2025-06-03 15:40:57.724233 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-06-03 15:40:57.724240 | orchestrator | Tuesday 03 June 2025 15:36:32 +0000 (0:00:00.313) 0:06:51.903 ********** 2025-06-03 15:40:57.724248 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-03 15:40:57.724263 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-03 15:40:57.724270 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-03 15:40:57.724278 | orchestrator | 2025-06-03 15:40:57.724285 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-06-03 15:40:57.724293 | orchestrator | Tuesday 03 June 2025 15:36:33 +0000 (0:00:00.708) 0:06:52.612 ********** 2025-06-03 15:40:57.724299 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.724303 | orchestrator | 2025-06-03 15:40:57.724308 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-06-03 15:40:57.724313 | orchestrator | Tuesday 03 June 2025 15:36:34 +0000 (0:00:00.625) 0:06:53.237 ********** 2025-06-03 15:40:57.724318 | orchestrator | skipping: 
[testbed-node-3] 2025-06-03 15:40:57.724323 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.724327 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.724332 | orchestrator | 2025-06-03 15:40:57.724337 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-06-03 15:40:57.724342 | orchestrator | Tuesday 03 June 2025 15:36:34 +0000 (0:00:00.251) 0:06:53.489 ********** 2025-06-03 15:40:57.724347 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.724352 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.724356 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.724361 | orchestrator | 2025-06-03 15:40:57.724369 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-06-03 15:40:57.724377 | orchestrator | Tuesday 03 June 2025 15:36:34 +0000 (0:00:00.250) 0:06:53.740 ********** 2025-06-03 15:40:57.724384 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.724392 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.724399 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.724406 | orchestrator | 2025-06-03 15:40:57.724413 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-06-03 15:40:57.724421 | orchestrator | Tuesday 03 June 2025 15:36:35 +0000 (0:00:00.785) 0:06:54.526 ********** 2025-06-03 15:40:57.724428 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.724434 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.724442 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.724450 | orchestrator | 2025-06-03 15:40:57.724457 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-06-03 15:40:57.724465 | orchestrator | Tuesday 03 June 2025 15:36:35 +0000 (0:00:00.320) 0:06:54.846 ********** 2025-06-03 15:40:57.724473 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-03 15:40:57.724488 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-03 15:40:57.724496 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-03 15:40:57.724503 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-03 15:40:57.724511 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-03 15:40:57.724518 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-03 15:40:57.724542 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-03 15:40:57.724563 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-03 15:40:57.724572 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-03 15:40:57.724580 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-03 15:40:57.724588 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-03 15:40:57.724595 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-03 15:40:57.724610 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-03 15:40:57.724618 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-03 15:40:57.724625 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-03 15:40:57.724634 | orchestrator | 2025-06-03 15:40:57.724642 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2025-06-03 15:40:57.724651 | orchestrator | Tuesday 03 June 2025 15:36:38 +0000 (0:00:02.973) 0:06:57.820 ********** 2025-06-03 15:40:57.724659 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.724668 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.724677 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.724685 | orchestrator | 2025-06-03 15:40:57.724694 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-06-03 15:40:57.724703 | orchestrator | Tuesday 03 June 2025 15:36:39 +0000 (0:00:00.315) 0:06:58.136 ********** 2025-06-03 15:40:57.724711 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.724720 | orchestrator | 2025-06-03 15:40:57.724728 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-06-03 15:40:57.724737 | orchestrator | Tuesday 03 June 2025 15:36:39 +0000 (0:00:00.645) 0:06:58.782 ********** 2025-06-03 15:40:57.724745 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-03 15:40:57.724754 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-03 15:40:57.724762 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-03 15:40:57.724771 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-06-03 15:40:57.724779 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-06-03 15:40:57.724787 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-06-03 15:40:57.724796 | orchestrator | 2025-06-03 15:40:57.724805 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-06-03 15:40:57.724813 | orchestrator | Tuesday 03 June 2025 15:36:40 +0000 (0:00:01.009) 0:06:59.791 ********** 2025-06-03 15:40:57.724823 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:40:57.724833 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-03 15:40:57.724842 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-03 15:40:57.724850 | orchestrator | 2025-06-03 15:40:57.724859 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-06-03 15:40:57.724867 | orchestrator | Tuesday 03 June 2025 15:36:42 +0000 (0:00:02.097) 0:07:01.888 ********** 2025-06-03 15:40:57.724877 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-03 15:40:57.724885 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-03 15:40:57.724893 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:57.724901 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-03 15:40:57.724911 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-03 15:40:57.724920 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:57.724929 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-03 15:40:57.724937 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-03 15:40:57.724945 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:57.724953 | orchestrator | 2025-06-03 15:40:57.724961 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-06-03 15:40:57.724970 | orchestrator | Tuesday 03 June 2025 15:36:43 +0000 (0:00:01.121) 0:07:03.010 ********** 2025-06-03 15:40:57.724979 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-03 15:40:57.724987 | orchestrator | 2025-06-03 15:40:57.724995 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-06-03 15:40:57.725003 | orchestrator | Tuesday 03 June 2025 15:36:46 +0000 (0:00:02.539) 0:07:05.550 ********** 2025-06-03 15:40:57.725019 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.725028 | orchestrator | 2025-06-03 15:40:57.725036 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-06-03 15:40:57.725045 | orchestrator | Tuesday 03 June 2025 15:36:46 +0000 (0:00:00.463) 0:07:06.013 ********** 2025-06-03 15:40:57.725060 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5a262827-4eba-5d37-ab06-09e1d49a835c', 'data_vg': 'ceph-5a262827-4eba-5d37-ab06-09e1d49a835c'}) 2025-06-03 15:40:57.725069 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f00e4ac9-9831-582f-92bc-f2b318630797', 'data_vg': 'ceph-f00e4ac9-9831-582f-92bc-f2b318630797'}) 2025-06-03 15:40:57.725076 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-610c71bb-335d-5813-8d53-12327c30775e', 'data_vg': 'ceph-610c71bb-335d-5813-8d53-12327c30775e'}) 2025-06-03 15:40:57.725092 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d47078ac-4564-569b-bfa7-6d988d420f95', 'data_vg': 'ceph-d47078ac-4564-569b-bfa7-6d988d420f95'}) 2025-06-03 15:40:57.725100 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ae8860ce-b651-5449-9c0b-e6c018225b94', 'data_vg': 'ceph-ae8860ce-b651-5449-9c0b-e6c018225b94'}) 2025-06-03 15:40:57.725108 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2547461e-5dcb-5046-b3ed-0a182c83d3a8', 'data_vg': 'ceph-2547461e-5dcb-5046-b3ed-0a182c83d3a8'}) 2025-06-03 15:40:57.725115 | orchestrator | 2025-06-03 15:40:57.725123 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-06-03 15:40:57.725130 | orchestrator | Tuesday 03 June 2025 15:37:30 +0000 (0:00:43.585) 0:07:49.599 ********** 2025-06-03 15:40:57.725138 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.725146 | orchestrator | skipping: [testbed-node-4] 2025-06-03 
15:40:57.725154 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.725162 | orchestrator | 2025-06-03 15:40:57.725170 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-06-03 15:40:57.725178 | orchestrator | Tuesday 03 June 2025 15:37:31 +0000 (0:00:00.534) 0:07:50.134 ********** 2025-06-03 15:40:57.725183 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.725188 | orchestrator | 2025-06-03 15:40:57.725192 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-06-03 15:40:57.725197 | orchestrator | Tuesday 03 June 2025 15:37:31 +0000 (0:00:00.517) 0:07:50.652 ********** 2025-06-03 15:40:57.725202 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.725207 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.725211 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.725219 | orchestrator | 2025-06-03 15:40:57.725226 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-06-03 15:40:57.725234 | orchestrator | Tuesday 03 June 2025 15:37:32 +0000 (0:00:00.730) 0:07:51.382 ********** 2025-06-03 15:40:57.725241 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.725249 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.725257 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.725265 | orchestrator | 2025-06-03 15:40:57.725273 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-06-03 15:40:57.725281 | orchestrator | Tuesday 03 June 2025 15:37:35 +0000 (0:00:03.021) 0:07:54.403 ********** 2025-06-03 15:40:57.725290 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.725295 | orchestrator | 2025-06-03 15:40:57.725300 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-06-03 15:40:57.725305 | orchestrator | Tuesday 03 June 2025 15:37:35 +0000 (0:00:00.510) 0:07:54.913 ********** 2025-06-03 15:40:57.725309 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:57.725314 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:57.725324 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:57.725329 | orchestrator | 2025-06-03 15:40:57.725333 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-06-03 15:40:57.725338 | orchestrator | Tuesday 03 June 2025 15:37:37 +0000 (0:00:01.227) 0:07:56.141 ********** 2025-06-03 15:40:57.725343 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:57.725348 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:57.725353 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:57.725357 | orchestrator | 2025-06-03 15:40:57.725362 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-06-03 15:40:57.725367 | orchestrator | Tuesday 03 June 2025 15:37:38 +0000 (0:00:01.491) 0:07:57.633 ********** 2025-06-03 15:40:57.725372 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:57.725376 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:57.725381 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:57.725386 | orchestrator | 2025-06-03 15:40:57.725391 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-06-03 15:40:57.725395 | orchestrator | Tuesday 03 June 2025 15:37:40 +0000 (0:00:01.860) 0:07:59.493 ********** 2025-06-03 15:40:57.725400 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.725405 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.725410 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.725414 | orchestrator | 2025-06-03 15:40:57.725419 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-06-03 15:40:57.725424 | orchestrator | Tuesday 03 June 2025 15:37:40 +0000 (0:00:00.349) 0:07:59.843 ********** 2025-06-03 15:40:57.725429 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.725433 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.725438 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.725443 | orchestrator | 2025-06-03 15:40:57.725448 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-06-03 15:40:57.725452 | orchestrator | Tuesday 03 June 2025 15:37:41 +0000 (0:00:00.297) 0:08:00.141 ********** 2025-06-03 15:40:57.725457 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-06-03 15:40:57.725462 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-03 15:40:57.725467 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-06-03 15:40:57.725471 | orchestrator | ok: [testbed-node-3] => (item=1) 2025-06-03 15:40:57.725476 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-06-03 15:40:57.725481 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-06-03 15:40:57.725485 | orchestrator | 2025-06-03 15:40:57.725490 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-06-03 15:40:57.725498 | orchestrator | Tuesday 03 June 2025 15:37:42 +0000 (0:00:01.304) 0:08:01.445 ********** 2025-06-03 15:40:57.725503 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-06-03 15:40:57.725508 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-06-03 15:40:57.725513 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-06-03 15:40:57.725518 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-06-03 15:40:57.725561 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-06-03 15:40:57.725567 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-03 15:40:57.725572 | orchestrator | 2025-06-03 15:40:57.725577 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-06-03 15:40:57.725587 | orchestrator | Tuesday 03 June 2025 15:37:44 +0000 (0:00:02.146) 0:08:03.592 ********** 2025-06-03 15:40:57.725592 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-06-03 15:40:57.725597 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-06-03 15:40:57.725602 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-06-03 15:40:57.725606 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-06-03 15:40:57.725611 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-06-03 15:40:57.725616 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-03 15:40:57.725621 | orchestrator | 2025-06-03 15:40:57.725625 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-06-03 15:40:57.725635 | orchestrator | Tuesday 03 June 2025 15:37:48 +0000 (0:00:03.657) 0:08:07.249 ********** 2025-06-03 15:40:57.725640 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.725644 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.725649 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-03 15:40:57.725654 | orchestrator | 2025-06-03 15:40:57.725659 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-06-03 15:40:57.725664 | orchestrator | Tuesday 03 June 2025 15:37:50 +0000 (0:00:02.583) 0:08:09.832 ********** 2025-06-03 15:40:57.725668 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.725673 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.725678 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-06-03 15:40:57.725683 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-03 15:40:57.725688 | orchestrator | 2025-06-03 15:40:57.725693 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-06-03 15:40:57.725698 | orchestrator | Tuesday 03 June 2025 15:38:04 +0000 (0:00:13.241) 0:08:23.074 ********** 2025-06-03 15:40:57.725702 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.725707 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.725712 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.725717 | orchestrator | 2025-06-03 15:40:57.725721 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-03 15:40:57.725726 | orchestrator | Tuesday 03 June 2025 15:38:04 +0000 (0:00:00.919) 0:08:23.993 ********** 2025-06-03 15:40:57.725731 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.725736 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.725741 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.725745 | orchestrator | 2025-06-03 15:40:57.725750 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-03 15:40:57.725757 | orchestrator | Tuesday 03 June 2025 15:38:05 +0000 (0:00:00.611) 0:08:24.605 ********** 2025-06-03 15:40:57.725765 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.725772 | orchestrator | 2025-06-03 15:40:57.725780 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-03 15:40:57.725787 | orchestrator | Tuesday 03 June 2025 15:38:06 +0000 (0:00:00.521) 0:08:25.127 ********** 2025-06-03 15:40:57.725795 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:57.725802 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-06-03 15:40:57.725808 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:57.725815 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.725821 | orchestrator | 2025-06-03 15:40:57.725828 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-03 15:40:57.725834 | orchestrator | Tuesday 03 June 2025 15:38:06 +0000 (0:00:00.392) 0:08:25.519 ********** 2025-06-03 15:40:57.725841 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.725847 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.725854 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.725862 | orchestrator | 2025-06-03 15:40:57.725877 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-03 15:40:57.725888 | orchestrator | Tuesday 03 June 2025 15:38:06 +0000 (0:00:00.323) 0:08:25.842 ********** 2025-06-03 15:40:57.725895 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.725902 | orchestrator | 2025-06-03 15:40:57.725912 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-03 15:40:57.725924 | orchestrator | Tuesday 03 June 2025 15:38:07 +0000 (0:00:00.224) 0:08:26.067 ********** 2025-06-03 15:40:57.725932 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.725939 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.725946 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.725961 | orchestrator | 2025-06-03 15:40:57.725968 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-03 15:40:57.725975 | orchestrator | Tuesday 03 June 2025 15:38:07 +0000 (0:00:00.598) 0:08:26.665 ********** 2025-06-03 15:40:57.725981 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.725988 | orchestrator | 2025-06-03 15:40:57.725995 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-03 15:40:57.726002 | orchestrator | Tuesday 03 June 2025 15:38:07 +0000 (0:00:00.255) 0:08:26.921 ********** 2025-06-03 15:40:57.726010 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.726042 | orchestrator | 2025-06-03 15:40:57.726051 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-03 15:40:57.726063 | orchestrator | Tuesday 03 June 2025 15:38:08 +0000 (0:00:00.255) 0:08:27.177 ********** 2025-06-03 15:40:57.726072 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.726079 | orchestrator | 2025-06-03 15:40:57.726089 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-03 15:40:57.726097 | orchestrator | Tuesday 03 June 2025 15:38:08 +0000 (0:00:00.129) 0:08:27.306 ********** 2025-06-03 15:40:57.726103 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.726111 | orchestrator | 2025-06-03 15:40:57.726118 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-03 15:40:57.726125 | orchestrator | Tuesday 03 June 2025 15:38:08 +0000 (0:00:00.219) 0:08:27.525 ********** 2025-06-03 15:40:57.726133 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.726141 | orchestrator | 2025-06-03 15:40:57.726158 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-03 15:40:57.726166 | orchestrator | Tuesday 03 June 2025 15:38:08 +0000 (0:00:00.226) 0:08:27.752 ********** 2025-06-03 15:40:57.726173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:57.726179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:57.726186 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:57.726192 | orchestrator | skipping: [testbed-node-3] 2025-06-03 
15:40:57.726199 | orchestrator | 2025-06-03 15:40:57.726205 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-03 15:40:57.726212 | orchestrator | Tuesday 03 June 2025 15:38:09 +0000 (0:00:00.400) 0:08:28.152 ********** 2025-06-03 15:40:57.726219 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.726226 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.726233 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.726241 | orchestrator | 2025-06-03 15:40:57.726248 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-03 15:40:57.726255 | orchestrator | Tuesday 03 June 2025 15:38:09 +0000 (0:00:00.428) 0:08:28.580 ********** 2025-06-03 15:40:57.726262 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.726269 | orchestrator | 2025-06-03 15:40:57.726277 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-03 15:40:57.726285 | orchestrator | Tuesday 03 June 2025 15:38:10 +0000 (0:00:00.781) 0:08:29.362 ********** 2025-06-03 15:40:57.726293 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.726301 | orchestrator | 2025-06-03 15:40:57.726308 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-06-03 15:40:57.726315 | orchestrator | 2025-06-03 15:40:57.726321 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-03 15:40:57.726328 | orchestrator | Tuesday 03 June 2025 15:38:11 +0000 (0:00:00.727) 0:08:30.089 ********** 2025-06-03 15:40:57.726337 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.726345 | orchestrator | 2025-06-03 15:40:57.726353 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-06-03 15:40:57.726360 | orchestrator | Tuesday 03 June 2025 15:38:12 +0000 (0:00:01.228) 0:08:31.318 ********** 2025-06-03 15:40:57.726376 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.726384 | orchestrator | 2025-06-03 15:40:57.726391 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-03 15:40:57.726399 | orchestrator | Tuesday 03 June 2025 15:38:13 +0000 (0:00:01.294) 0:08:32.612 ********** 2025-06-03 15:40:57.726406 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.726414 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.726422 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.726430 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.726437 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.726445 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.726452 | orchestrator | 2025-06-03 15:40:57.726460 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-03 15:40:57.726468 | orchestrator | Tuesday 03 June 2025 15:38:14 +0000 (0:00:01.011) 0:08:33.624 ********** 2025-06-03 15:40:57.726475 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.726483 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.726491 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.726498 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.726506 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.726514 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.726521 | orchestrator | 2025-06-03 15:40:57.726546 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-03 15:40:57.726554 | orchestrator | Tuesday 03 
June 2025 15:38:15 +0000 (0:00:01.043) 0:08:34.668 ********** 2025-06-03 15:40:57.726560 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.726568 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.726576 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.726584 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.726592 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.726599 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.726607 | orchestrator | 2025-06-03 15:40:57.726615 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-03 15:40:57.726623 | orchestrator | Tuesday 03 June 2025 15:38:16 +0000 (0:00:01.302) 0:08:35.970 ********** 2025-06-03 15:40:57.726631 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.726639 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.726647 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.726655 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.726663 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.726671 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.726679 | orchestrator | 2025-06-03 15:40:57.726686 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-03 15:40:57.726694 | orchestrator | Tuesday 03 June 2025 15:38:18 +0000 (0:00:01.074) 0:08:37.044 ********** 2025-06-03 15:40:57.726702 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.726710 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.726718 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.726726 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.726740 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.726748 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.726756 | orchestrator | 2025-06-03 15:40:57.726764 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-06-03 15:40:57.726772 | orchestrator | Tuesday 03 June 2025 15:38:19 +0000 (0:00:00.971) 0:08:38.016 ********** 2025-06-03 15:40:57.726780 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.726788 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.726796 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.726803 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.726811 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.726818 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.726828 | orchestrator | 2025-06-03 15:40:57.726852 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-03 15:40:57.726860 | orchestrator | Tuesday 03 June 2025 15:38:19 +0000 (0:00:00.629) 0:08:38.645 ********** 2025-06-03 15:40:57.726868 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.726877 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.726885 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.726893 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.726900 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.726907 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.726914 | orchestrator | 2025-06-03 15:40:57.726922 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-03 15:40:57.726929 | orchestrator | Tuesday 03 June 2025 15:38:20 +0000 (0:00:00.834) 0:08:39.480 ********** 2025-06-03 15:40:57.726936 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.726944 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.726951 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.726958 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.726965 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.726972 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.726979 | 
orchestrator | 2025-06-03 15:40:57.726987 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-03 15:40:57.726994 | orchestrator | Tuesday 03 June 2025 15:38:21 +0000 (0:00:01.011) 0:08:40.492 ********** 2025-06-03 15:40:57.727001 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.727008 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.727017 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.727022 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.727026 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.727030 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.727035 | orchestrator | 2025-06-03 15:40:57.727040 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-03 15:40:57.727047 | orchestrator | Tuesday 03 June 2025 15:38:22 +0000 (0:00:01.232) 0:08:41.724 ********** 2025-06-03 15:40:57.727054 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.727062 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.727069 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.727077 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.727084 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.727092 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.727099 | orchestrator | 2025-06-03 15:40:57.727107 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-03 15:40:57.727114 | orchestrator | Tuesday 03 June 2025 15:38:23 +0000 (0:00:00.606) 0:08:42.331 ********** 2025-06-03 15:40:57.727122 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.727128 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.727133 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.727137 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.727142 | orchestrator | skipping: [testbed-node-4] 2025-06-03 
15:40:57.727146 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.727151 | orchestrator | 2025-06-03 15:40:57.727155 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-03 15:40:57.727160 | orchestrator | Tuesday 03 June 2025 15:38:24 +0000 (0:00:00.821) 0:08:43.152 ********** 2025-06-03 15:40:57.727165 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.727169 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.727174 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.727178 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.727183 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.727187 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.727192 | orchestrator | 2025-06-03 15:40:57.727196 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-03 15:40:57.727201 | orchestrator | Tuesday 03 June 2025 15:38:24 +0000 (0:00:00.638) 0:08:43.790 ********** 2025-06-03 15:40:57.727205 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.727210 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.727219 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.727224 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.727228 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.727233 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.727238 | orchestrator | 2025-06-03 15:40:57.727242 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-03 15:40:57.727247 | orchestrator | Tuesday 03 June 2025 15:38:25 +0000 (0:00:00.844) 0:08:44.635 ********** 2025-06-03 15:40:57.727251 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.727256 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.727260 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.727265 | orchestrator | ok: 
[testbed-node-3] 2025-06-03 15:40:57.727269 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.727274 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.727278 | orchestrator | 2025-06-03 15:40:57.727283 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-03 15:40:57.727287 | orchestrator | Tuesday 03 June 2025 15:38:26 +0000 (0:00:00.607) 0:08:45.242 ********** 2025-06-03 15:40:57.727292 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.727296 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.727301 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.727306 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.727310 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.727315 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.727319 | orchestrator | 2025-06-03 15:40:57.727324 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-03 15:40:57.727328 | orchestrator | Tuesday 03 June 2025 15:38:26 +0000 (0:00:00.646) 0:08:45.889 ********** 2025-06-03 15:40:57.727333 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:40:57.727337 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:40:57.727345 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:40:57.727350 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.727355 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.727359 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.727364 | orchestrator | 2025-06-03 15:40:57.727368 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-03 15:40:57.727373 | orchestrator | Tuesday 03 June 2025 15:38:27 +0000 (0:00:00.509) 0:08:46.398 ********** 2025-06-03 15:40:57.727377 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.727382 | orchestrator | ok: [testbed-node-1] 2025-06-03 
15:40:57.727386 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.727391 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.727396 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.727400 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.727405 | orchestrator | 2025-06-03 15:40:57.727414 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-03 15:40:57.727420 | orchestrator | Tuesday 03 June 2025 15:38:28 +0000 (0:00:00.682) 0:08:47.080 ********** 2025-06-03 15:40:57.727427 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.727435 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.727442 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.727449 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.727457 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.727464 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.727470 | orchestrator | 2025-06-03 15:40:57.727477 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-03 15:40:57.727485 | orchestrator | Tuesday 03 June 2025 15:38:28 +0000 (0:00:00.565) 0:08:47.646 ********** 2025-06-03 15:40:57.727492 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.727500 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.727508 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.727515 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.727539 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.727545 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.727554 | orchestrator | 2025-06-03 15:40:57.727559 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-06-03 15:40:57.727563 | orchestrator | Tuesday 03 June 2025 15:38:29 +0000 (0:00:01.056) 0:08:48.702 ********** 2025-06-03 15:40:57.727568 | orchestrator | changed: [testbed-node-0] 2025-06-03 
15:40:57.727572 | orchestrator | 2025-06-03 15:40:57.727577 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-06-03 15:40:57.727581 | orchestrator | Tuesday 03 June 2025 15:38:33 +0000 (0:00:04.134) 0:08:52.836 ********** 2025-06-03 15:40:57.727586 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.727590 | orchestrator | 2025-06-03 15:40:57.727595 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-06-03 15:40:57.727599 | orchestrator | Tuesday 03 June 2025 15:38:35 +0000 (0:00:01.999) 0:08:54.835 ********** 2025-06-03 15:40:57.727604 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.727608 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.727613 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.727617 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:57.727622 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:57.727626 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:57.727631 | orchestrator | 2025-06-03 15:40:57.727635 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-06-03 15:40:57.727640 | orchestrator | Tuesday 03 June 2025 15:38:37 +0000 (0:00:01.942) 0:08:56.778 ********** 2025-06-03 15:40:57.727644 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.727649 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.727653 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.727657 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:57.727662 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:57.727667 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:57.727671 | orchestrator | 2025-06-03 15:40:57.727676 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-06-03 15:40:57.727680 | orchestrator | Tuesday 03 June 2025 15:38:38 +0000 
(0:00:01.083) 0:08:57.862 ********** 2025-06-03 15:40:57.727685 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.727691 | orchestrator | 2025-06-03 15:40:57.727696 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-06-03 15:40:57.727701 | orchestrator | Tuesday 03 June 2025 15:38:40 +0000 (0:00:01.270) 0:08:59.132 ********** 2025-06-03 15:40:57.727708 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.727715 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.727722 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.727729 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:57.727737 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:57.727744 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:57.727752 | orchestrator | 2025-06-03 15:40:57.727760 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-06-03 15:40:57.727767 | orchestrator | Tuesday 03 June 2025 15:38:42 +0000 (0:00:01.990) 0:09:01.122 ********** 2025-06-03 15:40:57.727775 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.727782 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:57.727789 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.727797 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.727805 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:57.727812 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:57.727820 | orchestrator | 2025-06-03 15:40:57.727827 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-06-03 15:40:57.727834 | orchestrator | Tuesday 03 June 2025 15:38:45 +0000 (0:00:03.510) 0:09:04.633 ********** 2025-06-03 15:40:57.727841 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.727853 | orchestrator | 2025-06-03 15:40:57.727861 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-06-03 15:40:57.727868 | orchestrator | Tuesday 03 June 2025 15:38:46 +0000 (0:00:01.079) 0:09:05.712 ********** 2025-06-03 15:40:57.727876 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.727883 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.727890 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:40:57.727898 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.727906 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.727913 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.727920 | orchestrator | 2025-06-03 15:40:57.727928 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-06-03 15:40:57.727935 | orchestrator | Tuesday 03 June 2025 15:38:47 +0000 (0:00:00.715) 0:09:06.427 ********** 2025-06-03 15:40:57.727942 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:40:57.727950 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:40:57.727956 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:40:57.727964 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:40:57.727971 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:40:57.727979 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:40:57.727986 | orchestrator | 2025-06-03 15:40:57.727994 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-06-03 15:40:57.728013 | orchestrator | Tuesday 03 June 2025 15:38:49 +0000 (0:00:02.305) 0:09:08.732 ********** 2025-06-03 15:40:57.728021 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:40:57.728028 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:40:57.728036 | orchestrator | ok: 
[testbed-node-2] 2025-06-03 15:40:57.728043 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.728051 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.728058 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.728066 | orchestrator | 2025-06-03 15:40:57.728073 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-06-03 15:40:57.728081 | orchestrator | 2025-06-03 15:40:57.728089 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-03 15:40:57.728097 | orchestrator | Tuesday 03 June 2025 15:38:50 +0000 (0:00:00.933) 0:09:09.665 ********** 2025-06-03 15:40:57.728105 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.728112 | orchestrator | 2025-06-03 15:40:57.728120 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-03 15:40:57.728128 | orchestrator | Tuesday 03 June 2025 15:38:51 +0000 (0:00:00.458) 0:09:10.124 ********** 2025-06-03 15:40:57.728135 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.728143 | orchestrator | 2025-06-03 15:40:57.728151 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-03 15:40:57.728158 | orchestrator | Tuesday 03 June 2025 15:38:51 +0000 (0:00:00.624) 0:09:10.748 ********** 2025-06-03 15:40:57.728246 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.728272 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.728280 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.728287 | orchestrator | 2025-06-03 15:40:57.728295 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-03 15:40:57.728303 | orchestrator | 
Tuesday 03 June 2025 15:38:52 +0000 (0:00:00.296) 0:09:11.045 ********** 2025-06-03 15:40:57.728311 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.728318 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.728326 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.728333 | orchestrator | 2025-06-03 15:40:57.728341 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-03 15:40:57.728349 | orchestrator | Tuesday 03 June 2025 15:38:52 +0000 (0:00:00.712) 0:09:11.757 ********** 2025-06-03 15:40:57.728356 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.728364 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.728377 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.728386 | orchestrator | 2025-06-03 15:40:57.728394 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-03 15:40:57.728402 | orchestrator | Tuesday 03 June 2025 15:38:53 +0000 (0:00:00.910) 0:09:12.667 ********** 2025-06-03 15:40:57.728410 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.728418 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.728426 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.728433 | orchestrator | 2025-06-03 15:40:57.728442 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-03 15:40:57.728450 | orchestrator | Tuesday 03 June 2025 15:38:54 +0000 (0:00:00.774) 0:09:13.441 ********** 2025-06-03 15:40:57.728457 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.728464 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.728471 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.728478 | orchestrator | 2025-06-03 15:40:57.728486 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-03 15:40:57.728493 | orchestrator | Tuesday 03 June 2025 15:38:54 +0000 (0:00:00.270) 
0:09:13.712 ********** 2025-06-03 15:40:57.728501 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.728508 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.728516 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.728565 | orchestrator | 2025-06-03 15:40:57.728571 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-03 15:40:57.728576 | orchestrator | Tuesday 03 June 2025 15:38:55 +0000 (0:00:00.330) 0:09:14.043 ********** 2025-06-03 15:40:57.728580 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.728585 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.728589 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.728594 | orchestrator | 2025-06-03 15:40:57.728598 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-03 15:40:57.728603 | orchestrator | Tuesday 03 June 2025 15:38:55 +0000 (0:00:00.632) 0:09:14.675 ********** 2025-06-03 15:40:57.728607 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.728612 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.728616 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.728620 | orchestrator | 2025-06-03 15:40:57.728624 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-03 15:40:57.728628 | orchestrator | Tuesday 03 June 2025 15:38:56 +0000 (0:00:00.778) 0:09:15.453 ********** 2025-06-03 15:40:57.728632 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.728637 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.728641 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.728645 | orchestrator | 2025-06-03 15:40:57.728652 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-03 15:40:57.728659 | orchestrator | Tuesday 03 June 2025 15:38:57 +0000 (0:00:00.844) 0:09:16.297 ********** 2025-06-03 
15:40:57.728666 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.728678 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.728685 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.728692 | orchestrator | 2025-06-03 15:40:57.728699 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-03 15:40:57.728706 | orchestrator | Tuesday 03 June 2025 15:38:57 +0000 (0:00:00.400) 0:09:16.698 ********** 2025-06-03 15:40:57.728714 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.728721 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.728728 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.728736 | orchestrator | 2025-06-03 15:40:57.728741 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-03 15:40:57.728746 | orchestrator | Tuesday 03 June 2025 15:38:58 +0000 (0:00:00.602) 0:09:17.300 ********** 2025-06-03 15:40:57.728756 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.728760 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.728764 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.728768 | orchestrator | 2025-06-03 15:40:57.728777 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-03 15:40:57.728782 | orchestrator | Tuesday 03 June 2025 15:38:58 +0000 (0:00:00.374) 0:09:17.674 ********** 2025-06-03 15:40:57.728786 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.728790 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.728794 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.728798 | orchestrator | 2025-06-03 15:40:57.728802 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-03 15:40:57.728806 | orchestrator | Tuesday 03 June 2025 15:38:59 +0000 (0:00:00.458) 0:09:18.133 ********** 2025-06-03 15:40:57.728811 | orchestrator | ok: 
[testbed-node-3] 2025-06-03 15:40:57.728815 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.728819 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.728823 | orchestrator | 2025-06-03 15:40:57.728827 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-03 15:40:57.728831 | orchestrator | Tuesday 03 June 2025 15:38:59 +0000 (0:00:00.444) 0:09:18.578 ********** 2025-06-03 15:40:57.728835 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.728839 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.728843 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.728847 | orchestrator | 2025-06-03 15:40:57.728851 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-03 15:40:57.728855 | orchestrator | Tuesday 03 June 2025 15:39:00 +0000 (0:00:00.609) 0:09:19.188 ********** 2025-06-03 15:40:57.728859 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.728863 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.728868 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.728872 | orchestrator | 2025-06-03 15:40:57.728876 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-03 15:40:57.728880 | orchestrator | Tuesday 03 June 2025 15:39:00 +0000 (0:00:00.425) 0:09:19.613 ********** 2025-06-03 15:40:57.728884 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.728888 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.728895 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.728901 | orchestrator | 2025-06-03 15:40:57.728908 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-03 15:40:57.728915 | orchestrator | Tuesday 03 June 2025 15:39:01 +0000 (0:00:00.497) 0:09:20.111 ********** 2025-06-03 15:40:57.728921 | orchestrator | ok: [testbed-node-3] 
2025-06-03 15:40:57.728927 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.728933 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.728940 | orchestrator |
2025-06-03 15:40:57.728947 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-03 15:40:57.728953 | orchestrator | Tuesday 03 June 2025 15:39:01 +0000 (0:00:00.498) 0:09:20.609 **********
2025-06-03 15:40:57.728960 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.728967 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.728974 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.728980 | orchestrator |
2025-06-03 15:40:57.728987 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-06-03 15:40:57.728994 | orchestrator | Tuesday 03 June 2025 15:39:02 +0000 (0:00:01.004) 0:09:21.614 **********
2025-06-03 15:40:57.729000 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.729007 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.729014 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-06-03 15:40:57.729021 | orchestrator |
2025-06-03 15:40:57.729028 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-06-03 15:40:57.729034 | orchestrator | Tuesday 03 June 2025 15:39:03 +0000 (0:00:00.558) 0:09:22.172 **********
2025-06-03 15:40:57.729041 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-03 15:40:57.729048 | orchestrator |
2025-06-03 15:40:57.729055 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-06-03 15:40:57.729062 | orchestrator | Tuesday 03 June 2025 15:39:05 +0000 (0:00:02.605) 0:09:24.777 **********
2025-06-03 15:40:57.729077 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-06-03 15:40:57.729086 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.729093 | orchestrator |
2025-06-03 15:40:57.729099 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-06-03 15:40:57.729106 | orchestrator | Tuesday 03 June 2025 15:39:05 +0000 (0:00:00.205) 0:09:24.982 **********
2025-06-03 15:40:57.729116 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-03 15:40:57.729140 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-03 15:40:57.729148 | orchestrator |
2025-06-03 15:40:57.729155 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-06-03 15:40:57.729162 | orchestrator | Tuesday 03 June 2025 15:39:15 +0000 (0:00:09.867) 0:09:34.850 **********
2025-06-03 15:40:57.729169 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-03 15:40:57.729176 | orchestrator |
2025-06-03 15:40:57.729183 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-06-03 15:40:57.729190 | orchestrator | Tuesday 03 June 2025 15:39:19 +0000 (0:00:03.779) 0:09:38.630 **********
2025-06-03 15:40:57.729201 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:40:57.729208 | orchestrator |
2025-06-03 15:40:57.729215 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-06-03 15:40:57.729222 | orchestrator | Tuesday 03 June 2025 15:39:20 +0000 (0:00:00.592) 0:09:39.222 **********
2025-06-03 15:40:57.729229 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-06-03 15:40:57.729236 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-06-03 15:40:57.729243 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-06-03 15:40:57.729249 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-06-03 15:40:57.729257 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-06-03 15:40:57.729265 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-06-03 15:40:57.729272 | orchestrator |
2025-06-03 15:40:57.729279 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-06-03 15:40:57.729287 | orchestrator | Tuesday 03 June 2025 15:39:21 +0000 (0:00:01.077) 0:09:40.300 **********
2025-06-03 15:40:57.729294 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-03 15:40:57.729301 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-03 15:40:57.729309 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-03 15:40:57.729316 | orchestrator |
2025-06-03 15:40:57.729323 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-06-03 15:40:57.729329 | orchestrator | Tuesday 03 June 2025 15:39:23 +0000 (0:00:02.362) 0:09:42.663 **********
2025-06-03 15:40:57.729336 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-03 15:40:57.729342 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-03 15:40:57.729349 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:40:57.729356 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-03 15:40:57.729362 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-06-03 15:40:57.729377 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:40:57.729384 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-03 15:40:57.729389 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-06-03 15:40:57.729393 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:40:57.729398 | orchestrator |
2025-06-03 15:40:57.729402 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-06-03 15:40:57.729406 | orchestrator | Tuesday 03 June 2025 15:39:25 +0000 (0:00:01.537) 0:09:44.200 **********
2025-06-03 15:40:57.729410 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:40:57.729414 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:40:57.729418 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:40:57.729422 | orchestrator |
2025-06-03 15:40:57.729426 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-06-03 15:40:57.729430 | orchestrator | Tuesday 03 June 2025 15:39:27 +0000 (0:00:02.667) 0:09:46.868 **********
2025-06-03 15:40:57.729435 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.729439 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.729443 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.729447 | orchestrator |
2025-06-03 15:40:57.729451 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-06-03 15:40:57.729455 | orchestrator | Tuesday 03 June 2025 15:39:28 +0000 (0:00:00.323) 0:09:47.191 **********
2025-06-03 15:40:57.729459 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:40:57.729465 | orchestrator |
2025-06-03 15:40:57.729471 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-06-03 15:40:57.729478 | orchestrator | Tuesday 03 June 2025 15:39:29 +0000 (0:00:00.826) 0:09:48.018 **********
2025-06-03 15:40:57.729485 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:40:57.729491 | orchestrator |
2025-06-03 15:40:57.729498 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-06-03 15:40:57.729504 | orchestrator | Tuesday 03 June 2025 15:39:29 +0000 (0:00:00.536) 0:09:48.554 **********
2025-06-03 15:40:57.729511 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:40:57.729517 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:40:57.729538 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:40:57.729544 | orchestrator |
2025-06-03 15:40:57.729550 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-06-03 15:40:57.729556 | orchestrator | Tuesday 03 June 2025 15:39:30 +0000 (0:00:01.239) 0:09:49.794 **********
2025-06-03 15:40:57.729561 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:40:57.729567 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:40:57.729573 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:40:57.729579 | orchestrator |
2025-06-03 15:40:57.729585 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-06-03 15:40:57.729592 | orchestrator | Tuesday 03 June 2025 15:39:32 +0000 (0:00:01.956) 0:09:51.254 **********
2025-06-03 15:40:57.729598 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:40:57.729605 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:40:57.729616 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:40:57.729622 | orchestrator |
2025-06-03 15:40:57.729629 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-06-03 15:40:57.729634 | orchestrator | Tuesday 03 June 2025 15:39:34 +0000 (0:00:02.727) 0:09:53.211 **********
2025-06-03 15:40:57.729640 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:40:57.729646 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:40:57.729652 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:40:57.729658 | orchestrator |
2025-06-03 15:40:57.729664 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-06-03 15:40:57.729670 | orchestrator | Tuesday 03 June 2025 15:39:36 +0000 (0:00:02.727) 0:09:55.939 **********
2025-06-03 15:40:57.729676 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.729688 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.729705 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.729712 | orchestrator |
2025-06-03 15:40:57.729717 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-03 15:40:57.729723 | orchestrator | Tuesday 03 June 2025 15:39:38 +0000 (0:00:01.281) 0:09:57.220 **********
2025-06-03 15:40:57.729730 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:40:57.729735 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:40:57.729741 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:40:57.729747 | orchestrator |
2025-06-03 15:40:57.729753 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-06-03 15:40:57.729759 | orchestrator | Tuesday 03 June 2025 15:39:38 +0000 (0:00:00.653) 0:09:57.873 **********
2025-06-03 15:40:57.729765 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:40:57.729771 | orchestrator |
2025-06-03 15:40:57.729777 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-06-03 15:40:57.729782 | orchestrator | Tuesday 03 June 2025 15:39:39 +0000 (0:00:00.613) 0:09:58.487 **********
2025-06-03 15:40:57.729789 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.729796 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.729801 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.729808 | orchestrator |
2025-06-03 15:40:57.729813 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-06-03 15:40:57.729819 | orchestrator | Tuesday 03 June 2025 15:39:39 +0000 (0:00:00.293) 0:09:58.781 **********
2025-06-03 15:40:57.729825 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:40:57.729833 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:40:57.729839 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:40:57.729846 | orchestrator |
2025-06-03 15:40:57.729852 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-06-03 15:40:57.729859 | orchestrator | Tuesday 03 June 2025 15:39:41 +0000 (0:00:01.244) 0:10:00.025 **********
2025-06-03 15:40:57.729865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-03 15:40:57.729871 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-03 15:40:57.729877 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-03 15:40:57.729884 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.729891 | orchestrator |
2025-06-03 15:40:57.729898 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-06-03 15:40:57.729904 | orchestrator | Tuesday 03 June 2025 15:39:41 +0000 (0:00:00.691) 0:10:00.716 **********
2025-06-03 15:40:57.729910 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.729917 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.729923 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.729929 | orchestrator |
2025-06-03 15:40:57.729935 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-06-03 15:40:57.729941 | orchestrator |
2025-06-03 15:40:57.729947 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-03 15:40:57.729953 | orchestrator | Tuesday 03 June 2025 15:39:42 +0000 (0:00:00.618) 0:10:01.335 **********
2025-06-03 15:40:57.729960 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:40:57.729967 | orchestrator |
2025-06-03 15:40:57.729973 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-03 15:40:57.729980 | orchestrator | Tuesday 03 June 2025 15:39:42 +0000 (0:00:00.430) 0:10:01.765 **********
2025-06-03 15:40:57.729986 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:40:57.729993 | orchestrator |
2025-06-03 15:40:57.729999 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-03 15:40:57.730007 | orchestrator | Tuesday 03 June 2025 15:39:43 +0000 (0:00:00.632) 0:10:02.397 **********
2025-06-03 15:40:57.730013 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.730074 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.730081 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.730088 | orchestrator |
2025-06-03 15:40:57.730096 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-03 15:40:57.730102 | orchestrator | Tuesday 03 June 2025 15:39:43 +0000 (0:00:00.322) 0:10:02.719 **********
2025-06-03 15:40:57.730107 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.730113 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.730119 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.730124 | orchestrator |
2025-06-03 15:40:57.730130 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-03 15:40:57.730137 | orchestrator | Tuesday 03 June 2025 15:39:44 +0000 (0:00:00.708) 0:10:03.428 **********
2025-06-03 15:40:57.730143 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.730149 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.730155 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.730162 | orchestrator |
2025-06-03 15:40:57.730169 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-03 15:40:57.730175 | orchestrator | Tuesday 03 June 2025 15:39:45 +0000 (0:00:00.728) 0:10:04.156 **********
2025-06-03 15:40:57.730182 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.730188 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.730194 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.730200 | orchestrator |
2025-06-03 15:40:57.730211 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-03 15:40:57.730217 | orchestrator | Tuesday 03 June 2025 15:39:46 +0000 (0:00:01.044) 0:10:05.201 **********
2025-06-03 15:40:57.730223 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.730229 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.730236 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.730242 | orchestrator |
2025-06-03 15:40:57.730247 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-03 15:40:57.730254 | orchestrator | Tuesday 03 June 2025 15:39:46 +0000 (0:00:00.306) 0:10:05.507 **********
2025-06-03 15:40:57.730259 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.730266 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.730272 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.730278 | orchestrator |
2025-06-03 15:40:57.730296 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-03 15:40:57.730303 | orchestrator | Tuesday 03 June 2025 15:39:46 +0000 (0:00:00.317) 0:10:05.824 **********
2025-06-03 15:40:57.730309 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.730315 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.730321 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.730328 | orchestrator |
2025-06-03 15:40:57.730334 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-03 15:40:57.730340 | orchestrator | Tuesday 03 June 2025 15:39:47 +0000 (0:00:00.306) 0:10:06.131 **********
2025-06-03 15:40:57.730346 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.730352 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.730371 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.730377 | orchestrator |
2025-06-03 15:40:57.730391 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-03 15:40:57.730397 | orchestrator | Tuesday 03 June 2025 15:39:48 +0000 (0:00:01.040) 0:10:07.171 **********
2025-06-03 15:40:57.730404 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.730411 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.730416 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.730423 | orchestrator |
2025-06-03 15:40:57.730430 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-03 15:40:57.730436 | orchestrator | Tuesday 03 June 2025 15:39:48 +0000 (0:00:00.760) 0:10:07.931 **********
2025-06-03 15:40:57.730443 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.730449 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.730465 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.730472 | orchestrator |
2025-06-03 15:40:57.730479 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-03 15:40:57.730485 | orchestrator | Tuesday 03 June 2025 15:39:49 +0000 (0:00:00.332) 0:10:08.263 **********
2025-06-03 15:40:57.730492 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.730499 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.730505 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.730512 | orchestrator |
2025-06-03 15:40:57.730519 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-03 15:40:57.730576 | orchestrator | Tuesday 03 June 2025 15:39:49 +0000 (0:00:00.326) 0:10:08.590 **********
2025-06-03 15:40:57.730583 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.730590 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.730597 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.730603 | orchestrator |
2025-06-03 15:40:57.730609 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-03 15:40:57.730615 | orchestrator | Tuesday 03 June 2025 15:39:50 +0000 (0:00:00.610) 0:10:09.201 **********
2025-06-03 15:40:57.730621 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.730627 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.730635 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.730641 | orchestrator |
2025-06-03 15:40:57.730648 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-03 15:40:57.730655 | orchestrator | Tuesday 03 June 2025 15:39:50 +0000 (0:00:00.354) 0:10:09.556 **********
2025-06-03 15:40:57.730661 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.730668 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.730675 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.730681 | orchestrator |
2025-06-03 15:40:57.730688 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-03 15:40:57.730696 | orchestrator | Tuesday 03 June 2025 15:39:50 +0000 (0:00:00.349) 0:10:09.905 **********
2025-06-03 15:40:57.730703 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.730710 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.730717 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.730724 | orchestrator |
2025-06-03 15:40:57.730731 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-03 15:40:57.730739 | orchestrator | Tuesday 03 June 2025 15:39:51 +0000 (0:00:00.297) 0:10:10.203 **********
2025-06-03 15:40:57.730746 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.730753 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.730759 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.730767 | orchestrator |
2025-06-03 15:40:57.730775 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-03 15:40:57.730782 | orchestrator | Tuesday 03 June 2025 15:39:51 +0000 (0:00:00.644) 0:10:10.847 **********
2025-06-03 15:40:57.730790 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.730798 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.730805 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.730813 | orchestrator |
2025-06-03 15:40:57.730821 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-03 15:40:57.730828 | orchestrator | Tuesday 03 June 2025 15:39:52 +0000 (0:00:00.323) 0:10:11.171 **********
2025-06-03 15:40:57.730836 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.730843 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.730851 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.730859 | orchestrator |
2025-06-03 15:40:57.730866 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-03 15:40:57.730874 | orchestrator | Tuesday 03 June 2025 15:39:52 +0000 (0:00:00.325) 0:10:11.496 **********
2025-06-03 15:40:57.730882 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:40:57.730889 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:40:57.730898 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:40:57.730905 | orchestrator |
2025-06-03 15:40:57.730912 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2025-06-03 15:40:57.730934 | orchestrator | Tuesday 03 June 2025 15:39:53 +0000 (0:00:00.792) 0:10:12.289 **********
2025-06-03 15:40:57.730941 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:40:57.730949 | orchestrator |
2025-06-03 15:40:57.730956 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-06-03 15:40:57.730963 | orchestrator | Tuesday 03 June 2025 15:39:53 +0000 (0:00:00.542) 0:10:12.832 **********
2025-06-03 15:40:57.730970 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-03 15:40:57.730978 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-03 15:40:57.730986 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-03 15:40:57.730994 | orchestrator |
2025-06-03 15:40:57.731013 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-06-03 15:40:57.731021 | orchestrator | Tuesday 03 June 2025 15:39:56 +0000 (0:00:02.626) 0:10:15.458 **********
2025-06-03 15:40:57.731028 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-03 15:40:57.731036 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-03 15:40:57.731043 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:40:57.731051 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-03 15:40:57.731059 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-06-03 15:40:57.731067 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:40:57.731075 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-03 15:40:57.731082 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-06-03 15:40:57.731090 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:40:57.731098 | orchestrator |
2025-06-03 15:40:57.731105 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2025-06-03 15:40:57.731113 | orchestrator | Tuesday 03 June 2025 15:39:57 +0000 (0:00:01.251) 0:10:16.710 **********
2025-06-03 15:40:57.731120 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.731127 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.731134 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.731141 | orchestrator |
2025-06-03 15:40:57.731148 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2025-06-03 15:40:57.731154 | orchestrator | Tuesday 03 June 2025 15:39:58 +0000 (0:00:00.566) 0:10:17.277 **********
2025-06-03 15:40:57.731161 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:40:57.731169 | orchestrator |
2025-06-03 15:40:57.731176 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2025-06-03 15:40:57.731183 | orchestrator | Tuesday 03 June 2025 15:39:58 +0000 (0:00:00.529) 0:10:17.807 **********
2025-06-03 15:40:57.731191 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-03 15:40:57.731200 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-03 15:40:57.731207 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-03 15:40:57.731214 | orchestrator |
2025-06-03 15:40:57.731222 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2025-06-03 15:40:57.731229 | orchestrator | Tuesday 03 June 2025 15:39:59 +0000 (0:00:00.779) 0:10:18.586 **********
2025-06-03 15:40:57.731236 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-03 15:40:57.731243 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-06-03 15:40:57.731251 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-03 15:40:57.731268 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-03 15:40:57.731275 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-06-03 15:40:57.731282 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-06-03 15:40:57.731289 | orchestrator |
2025-06-03 15:40:57.731296 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-06-03 15:40:57.731303 | orchestrator | Tuesday 03 June 2025 15:40:05 +0000 (0:00:06.067) 0:10:24.653 **********
2025-06-03 15:40:57.731309 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-03 15:40:57.731316 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-03 15:40:57.731323 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-03 15:40:57.731330 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-03 15:40:57.731337 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-03 15:40:57.731345 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-03 15:40:57.731352 | orchestrator |
2025-06-03 15:40:57.731359 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-06-03 15:40:57.731366 | orchestrator | Tuesday 03 June 2025 15:40:07 +0000 (0:00:02.335) 0:10:26.989 **********
2025-06-03 15:40:57.731373 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-03 15:40:57.731380 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:40:57.731388 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-03 15:40:57.731400 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:40:57.731407 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-03 15:40:57.731413 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:40:57.731420 | orchestrator |
2025-06-03 15:40:57.731426 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2025-06-03 15:40:57.731433 | orchestrator | Tuesday 03 June 2025 15:40:09 +0000 (0:00:01.212) 0:10:28.201 **********
2025-06-03 15:40:57.731439 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2025-06-03 15:40:57.731446 | orchestrator |
2025-06-03 15:40:57.731452 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2025-06-03 15:40:57.731467 | orchestrator | Tuesday 03 June 2025 15:40:09 +0000 (0:00:00.251) 0:10:28.453 **********
2025-06-03 15:40:57.731475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-03 15:40:57.731481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-03 15:40:57.731487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-03 15:40:57.731493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-03 15:40:57.731499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-03 15:40:57.731506 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.731512 | orchestrator |
2025-06-03 15:40:57.731519 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2025-06-03 15:40:57.731546 | orchestrator | Tuesday 03 June 2025 15:40:10 +0000 (0:00:00.911) 0:10:29.364 **********
2025-06-03 15:40:57.731553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-03 15:40:57.731559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-03 15:40:57.731571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-03 15:40:57.731577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-03 15:40:57.731583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-03 15:40:57.731590 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.731596 | orchestrator |
2025-06-03 15:40:57.731602 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2025-06-03 15:40:57.731609 | orchestrator | Tuesday 03 June 2025 15:40:11 +0000 (0:00:01.197) 0:10:30.562 **********
2025-06-03 15:40:57.731615 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-03 15:40:57.731622 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-03 15:40:57.731629 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-03 15:40:57.731636 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-03 15:40:57.731643 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-03 15:40:57.731649 | orchestrator |
2025-06-03 15:40:57.731655 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2025-06-03 15:40:57.731662 | orchestrator | Tuesday 03 June 2025 15:40:43 +0000 (0:00:31.849) 0:11:02.411 **********
2025-06-03 15:40:57.731669 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.731675 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.731682 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.731688 | orchestrator |
2025-06-03 15:40:57.731695 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2025-06-03 15:40:57.731702 | orchestrator | Tuesday 03 June 2025 15:40:43 +0000 (0:00:00.344) 0:11:02.756 **********
2025-06-03 15:40:57.731708 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.731715 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.731722 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.731729 | orchestrator |
2025-06-03 15:40:57.731735 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2025-06-03 15:40:57.731742 | orchestrator | Tuesday 03 June 2025 15:40:44 +0000 (0:00:00.309) 0:11:03.065 **********
2025-06-03 15:40:57.731749 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:40:57.731756 | orchestrator |
2025-06-03 15:40:57.731762 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2025-06-03 15:40:57.731775 | orchestrator | Tuesday 03 June 2025 15:40:44 +0000 (0:00:00.767) 0:11:03.833 **********
2025-06-03 15:40:57.731782 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:40:57.731789 | orchestrator |
2025-06-03 15:40:57.731796 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2025-06-03 15:40:57.731802 | orchestrator | Tuesday 03 June 2025 15:40:45 +0000 (0:00:00.567) 0:11:04.401 **********
2025-06-03 15:40:57.731808 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:40:57.731815 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:40:57.731821 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:40:57.731827 | orchestrator |
2025-06-03 15:40:57.731840 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2025-06-03 15:40:57.731852 | orchestrator | Tuesday 03 June 2025 15:40:46 +0000 (0:00:01.206) 0:11:05.608 **********
2025-06-03 15:40:57.731858 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:40:57.731865 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:40:57.731871 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:40:57.731877 | orchestrator |
2025-06-03 15:40:57.731884 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2025-06-03 15:40:57.731890 | orchestrator | Tuesday 03 June 2025 15:40:47 +0000 (0:00:01.331) 0:11:06.940 **********
2025-06-03 15:40:57.731896 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:40:57.731903 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:40:57.731909 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:40:57.731915 | orchestrator |
2025-06-03 15:40:57.731922 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2025-06-03 15:40:57.731928 | orchestrator | Tuesday 03 June 2025 15:40:49 +0000 (0:00:01.809) 0:11:08.749 **********
2025-06-03 15:40:57.731934 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-03 15:40:57.731939 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-03 15:40:57.731945 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-03 15:40:57.731951 | orchestrator |
2025-06-03 15:40:57.731956 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-03 15:40:57.731961 | orchestrator | Tuesday 03 June 2025 15:40:52 +0000 (0:00:02.673) 0:11:11.423 **********
2025-06-03 15:40:57.731967 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:40:57.731973 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:40:57.731980 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:40:57.731986 | orchestrator |
2025-06-03 15:40:57.731992 | orchestrator | RUNNING HANDLER
[ceph-handler : Rgws handler] ********************************** 2025-06-03 15:40:57.731998 | orchestrator | Tuesday 03 June 2025 15:40:52 +0000 (0:00:00.368) 0:11:11.791 ********** 2025-06-03 15:40:57.732005 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:40:57.732011 | orchestrator | 2025-06-03 15:40:57.732017 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-03 15:40:57.732023 | orchestrator | Tuesday 03 June 2025 15:40:53 +0000 (0:00:00.557) 0:11:12.349 ********** 2025-06-03 15:40:57.732029 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.732035 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.732042 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.732048 | orchestrator | 2025-06-03 15:40:57.732055 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-03 15:40:57.732061 | orchestrator | Tuesday 03 June 2025 15:40:53 +0000 (0:00:00.577) 0:11:12.926 ********** 2025-06-03 15:40:57.732068 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.732074 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:40:57.732081 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:40:57.732088 | orchestrator | 2025-06-03 15:40:57.732093 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-03 15:40:57.732099 | orchestrator | Tuesday 03 June 2025 15:40:54 +0000 (0:00:00.406) 0:11:13.332 ********** 2025-06-03 15:40:57.732106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:40:57.732113 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:40:57.732119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:40:57.732125 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:40:57.732131 | 
orchestrator | 2025-06-03 15:40:57.732137 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-03 15:40:57.732144 | orchestrator | Tuesday 03 June 2025 15:40:54 +0000 (0:00:00.580) 0:11:13.913 ********** 2025-06-03 15:40:57.732155 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:40:57.732160 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:40:57.732166 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:40:57.732172 | orchestrator | 2025-06-03 15:40:57.732177 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:40:57.732183 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-06-03 15:40:57.732190 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-06-03 15:40:57.732197 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-06-03 15:40:57.732204 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-06-03 15:40:57.732215 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-06-03 15:40:57.732222 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-06-03 15:40:57.732227 | orchestrator | 2025-06-03 15:40:57.732234 | orchestrator | 2025-06-03 15:40:57.732240 | orchestrator | 2025-06-03 15:40:57.732247 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:40:57.732253 | orchestrator | Tuesday 03 June 2025 15:40:55 +0000 (0:00:00.249) 0:11:14.162 ********** 2025-06-03 15:40:57.732266 | orchestrator | =============================================================================== 2025-06-03 15:40:57.732273 | orchestrator | 
ceph-container-common : Pulling Ceph container image ------------------- 77.68s 2025-06-03 15:40:57.732279 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.59s 2025-06-03 15:40:57.732286 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.30s 2025-06-03 15:40:57.732293 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.85s 2025-06-03 15:40:57.732299 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.72s 2025-06-03 15:40:57.732306 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.24s 2025-06-03 15:40:57.732313 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.90s 2025-06-03 15:40:57.732319 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.87s 2025-06-03 15:40:57.732326 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.05s 2025-06-03 15:40:57.732331 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.73s 2025-06-03 15:40:57.732337 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.61s 2025-06-03 15:40:57.732344 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 6.07s 2025-06-03 15:40:57.732350 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 5.00s 2025-06-03 15:40:57.732357 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.00s 2025-06-03 15:40:57.732364 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.13s 2025-06-03 15:40:57.732370 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.78s 2025-06-03 15:40:57.732377 | orchestrator 
| ceph-osd : Systemd start osd -------------------------------------------- 3.66s 2025-06-03 15:40:57.732384 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.51s 2025-06-03 15:40:57.732391 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.50s 2025-06-03 15:40:57.732398 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.07s 2025-06-03 15:40:57.732411 | orchestrator | 2025-06-03 15:40:57 | INFO  | Task d975f909-c71f-4dcc-a54f-d1176b7bd747 is in state STARTED 2025-06-03 15:40:57.732418 | orchestrator | 2025-06-03 15:40:57 | INFO  | Task d484ed7a-4dc2-4560-958c-f7c55614b831 is in state STARTED 2025-06-03 15:40:57.732425 | orchestrator | 2025-06-03 15:40:57 | INFO  | Task 59c2b23c-d503-413f-a93a-341903632085 is in state STARTED 2025-06-03 15:40:57.732432 | orchestrator | 2025-06-03 15:40:57 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:00.752033 | orchestrator | 2025-06-03 15:41:00 | INFO  | Task d975f909-c71f-4dcc-a54f-d1176b7bd747 is in state STARTED 2025-06-03 15:41:00.753795 | orchestrator | 2025-06-03 15:41:00 | INFO  | Task d484ed7a-4dc2-4560-958c-f7c55614b831 is in state STARTED 2025-06-03 15:41:00.756150 | orchestrator | 2025-06-03 15:41:00 | INFO  | Task 59c2b23c-d503-413f-a93a-341903632085 is in state STARTED 2025-06-03 15:41:00.756415 | orchestrator | 2025-06-03 15:41:00 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:03.795469 | orchestrator | 2025-06-03 15:41:03 | INFO  | Task d975f909-c71f-4dcc-a54f-d1176b7bd747 is in state STARTED 2025-06-03 15:41:03.797251 | orchestrator | 2025-06-03 15:41:03 | INFO  | Task d484ed7a-4dc2-4560-958c-f7c55614b831 is in state STARTED 2025-06-03 15:41:03.799995 | orchestrator | 2025-06-03 15:41:03 | INFO  | Task 59c2b23c-d503-413f-a93a-341903632085 is in state STARTED 2025-06-03 15:41:03.800083 | orchestrator | 2025-06-03 15:41:03 | INFO  | Wait 1 second(s) until 
the next check 2025-06-03 15:41:40.420433 | orchestrator | 2025-06-03 15:41:40 | INFO  | Task d975f909-c71f-4dcc-a54f-d1176b7bd747 is in state STARTED 2025-06-03 15:41:40.420735 | orchestrator | 2025-06-03 15:41:40 | INFO  | Task d484ed7a-4dc2-4560-958c-f7c55614b831 is in
state SUCCESS 2025-06-03 15:41:40.422753 | orchestrator | 2025-06-03 15:41:40.422811 | orchestrator | 2025-06-03 15:41:40.422820 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:41:40.422828 | orchestrator | 2025-06-03 15:41:40.422837 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:41:40.422844 | orchestrator | Tuesday 03 June 2025 15:38:31 +0000 (0:00:00.234) 0:00:00.234 ********** 2025-06-03 15:41:40.422850 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:40.422857 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:41:40.422863 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:41:40.422869 | orchestrator | 2025-06-03 15:41:40.422875 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:41:40.422881 | orchestrator | Tuesday 03 June 2025 15:38:31 +0000 (0:00:00.250) 0:00:00.485 ********** 2025-06-03 15:41:40.422887 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-06-03 15:41:40.422894 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-06-03 15:41:40.422899 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-06-03 15:41:40.422907 | orchestrator | 2025-06-03 15:41:40.422917 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-06-03 15:41:40.422926 | orchestrator | 2025-06-03 15:41:40.422936 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-03 15:41:40.422946 | orchestrator | Tuesday 03 June 2025 15:38:32 +0000 (0:00:00.339) 0:00:00.825 ********** 2025-06-03 15:41:40.422956 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:41:40.422966 | orchestrator | 2025-06-03 15:41:40.422976 | orchestrator | TASK [opensearch : 
Setting sysctl values] ************************************** 2025-06-03 15:41:40.422986 | orchestrator | Tuesday 03 June 2025 15:38:32 +0000 (0:00:00.452) 0:00:01.277 ********** 2025-06-03 15:41:40.422995 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-03 15:41:40.423006 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-03 15:41:40.423017 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-03 15:41:40.423026 | orchestrator | 2025-06-03 15:41:40.423036 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-06-03 15:41:40.423042 | orchestrator | Tuesday 03 June 2025 15:38:33 +0000 (0:00:00.808) 0:00:02.086 ********** 2025-06-03 15:41:40.423052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:40.423063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:40.423108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:40.423117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:40.423126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:40.423133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:40.423144 | orchestrator | 2025-06-03 15:41:40.423150 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-03 15:41:40.423156 | orchestrator | Tuesday 03 June 2025 15:38:35 +0000 (0:00:01.655) 0:00:03.741 ********** 2025-06-03 15:41:40.423165 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:41:40.423171 | orchestrator | 2025-06-03 15:41:40.423177 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-06-03 15:41:40.423183 | orchestrator | Tuesday 03 June 2025 15:38:35 +0000 (0:00:00.536) 0:00:04.278 ********** 2025-06-03 15:41:40.423197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:40.423204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:40.423210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:40.423217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:40.423235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:40.423242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:40.423249 | orchestrator | 2025-06-03 15:41:40.423255 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-06-03 15:41:40.423261 | orchestrator | Tuesday 03 June 2025 15:38:38 +0000 (0:00:02.835) 0:00:07.113 ********** 2025-06-03 15:41:40.423267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-03 15:41:40.423273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-03 15:41:40.423284 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:40.423294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-03 15:41:40.423305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-03 15:41:40.423312 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:40.423318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-03 15:41:40.423324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-03 15:41:40.423336 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:40.423342 | orchestrator | 2025-06-03 15:41:40.423348 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-06-03 15:41:40.423354 | orchestrator | Tuesday 03 
June 2025 15:38:39 +0000 (0:00:01.263) 0:00:08.376 ********** 2025-06-03 15:41:40.423363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-03 15:41:40.423373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}}}})  2025-06-03 15:41:40.423380 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:40.423386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-03 15:41:40.423393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-03 15:41:40.423404 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:40.423413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-03 15:41:40.423425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-03 15:41:40.423432 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:40.423438 | orchestrator | 2025-06-03 15:41:40.423444 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-06-03 15:41:40.423450 | orchestrator | Tuesday 03 June 2025 15:38:40 +0000 (0:00:01.093) 0:00:09.470 ********** 2025-06-03 15:41:40.423456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:40.423466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:40.423472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:40.423511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:40.423520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:40.423527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:40.423537 | orchestrator | 2025-06-03 15:41:40.423543 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-06-03 15:41:40.423549 | orchestrator | Tuesday 03 June 2025 15:38:43 +0000 (0:00:02.593) 0:00:12.064 ********** 2025-06-03 15:41:40.423555 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:41:40.423561 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:40.423567 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:41:40.423573 | orchestrator | 2025-06-03 15:41:40.423579 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-06-03 15:41:40.423585 | orchestrator | Tuesday 03 June 2025 15:38:46 +0000 (0:00:02.876) 0:00:14.941 ********** 2025-06-03 15:41:40.423591 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:41:40.423596 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:41:40.423602 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:40.423608 | orchestrator | 2025-06-03 15:41:40.423614 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-06-03 15:41:40.423619 | orchestrator | Tuesday 03 June 2025 15:38:48 +0000 (0:00:01.809) 0:00:16.750 ********** 2025-06-03 15:41:40.423629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:40.423640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:40.423646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-03 15:41:40.423657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:40.423666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:40.423677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-03 15:41:40.423683 | orchestrator | 2025-06-03 15:41:40.423689 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-03 15:41:40.423695 | orchestrator | Tuesday 03 June 2025 15:38:50 +0000 (0:00:01.956) 0:00:18.707 ********** 2025-06-03 15:41:40.423701 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:40.423711 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:40.423717 | 
orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:40.423723 | orchestrator | 2025-06-03 15:41:40.423728 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-03 15:41:40.423734 | orchestrator | Tuesday 03 June 2025 15:38:50 +0000 (0:00:00.280) 0:00:18.987 ********** 2025-06-03 15:41:40.423740 | orchestrator | 2025-06-03 15:41:40.423746 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-03 15:41:40.423751 | orchestrator | Tuesday 03 June 2025 15:38:50 +0000 (0:00:00.062) 0:00:19.050 ********** 2025-06-03 15:41:40.423757 | orchestrator | 2025-06-03 15:41:40.423763 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-03 15:41:40.423769 | orchestrator | Tuesday 03 June 2025 15:38:50 +0000 (0:00:00.060) 0:00:19.110 ********** 2025-06-03 15:41:40.423775 | orchestrator | 2025-06-03 15:41:40.423780 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-06-03 15:41:40.423786 | orchestrator | Tuesday 03 June 2025 15:38:50 +0000 (0:00:00.183) 0:00:19.293 ********** 2025-06-03 15:41:40.423792 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:40.423798 | orchestrator | 2025-06-03 15:41:40.423804 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-06-03 15:41:40.423809 | orchestrator | Tuesday 03 June 2025 15:38:50 +0000 (0:00:00.180) 0:00:19.473 ********** 2025-06-03 15:41:40.423815 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:40.423821 | orchestrator | 2025-06-03 15:41:40.423827 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-06-03 15:41:40.423833 | orchestrator | Tuesday 03 June 2025 15:38:50 +0000 (0:00:00.205) 0:00:19.678 ********** 2025-06-03 15:41:40.423838 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:41:40.423844 | 
orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:40.423850 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:41:40.423856 | orchestrator | 2025-06-03 15:41:40.423861 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-06-03 15:41:40.423867 | orchestrator | Tuesday 03 June 2025 15:40:10 +0000 (0:01:19.052) 0:01:38.731 ********** 2025-06-03 15:41:40.423873 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:40.423879 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:41:40.423885 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:41:40.423890 | orchestrator | 2025-06-03 15:41:40.423896 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-03 15:41:40.423902 | orchestrator | Tuesday 03 June 2025 15:41:26 +0000 (0:01:16.462) 0:02:55.194 ********** 2025-06-03 15:41:40.423911 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:41:40.423921 | orchestrator | 2025-06-03 15:41:40.423931 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-06-03 15:41:40.423940 | orchestrator | Tuesday 03 June 2025 15:41:27 +0000 (0:00:00.680) 0:02:55.875 ********** 2025-06-03 15:41:40.423949 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:40.423960 | orchestrator | 2025-06-03 15:41:40.423971 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-06-03 15:41:40.423980 | orchestrator | Tuesday 03 June 2025 15:41:29 +0000 (0:00:02.392) 0:02:58.267 ********** 2025-06-03 15:41:40.423989 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:40.423995 | orchestrator | 2025-06-03 15:41:40.424001 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-06-03 15:41:40.424006 | orchestrator | Tuesday 03 June 2025 15:41:31 +0000 
(0:00:02.259) 0:03:00.526 ********** 2025-06-03 15:41:40.424012 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:40.424018 | orchestrator | 2025-06-03 15:41:40.424024 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-06-03 15:41:40.424029 | orchestrator | Tuesday 03 June 2025 15:41:34 +0000 (0:00:02.879) 0:03:03.405 ********** 2025-06-03 15:41:40.424035 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:40.424048 | orchestrator | 2025-06-03 15:41:40.424058 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:41:40.424064 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:41:40.424071 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-03 15:41:40.424077 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-03 15:41:40.424083 | orchestrator | 2025-06-03 15:41:40.424089 | orchestrator | 2025-06-03 15:41:40.424095 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:41:40.424105 | orchestrator | Tuesday 03 June 2025 15:41:37 +0000 (0:00:02.387) 0:03:05.793 ********** 2025-06-03 15:41:40.424111 | orchestrator | =============================================================================== 2025-06-03 15:41:40.424117 | orchestrator | opensearch : Restart opensearch container ------------------------------ 79.05s 2025-06-03 15:41:40.424123 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 76.46s 2025-06-03 15:41:40.424128 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.88s 2025-06-03 15:41:40.424134 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.88s 2025-06-03 
15:41:40.424140 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.84s 2025-06-03 15:41:40.424146 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.59s 2025-06-03 15:41:40.424151 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.39s 2025-06-03 15:41:40.424157 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.39s 2025-06-03 15:41:40.424163 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.26s 2025-06-03 15:41:40.424168 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.96s 2025-06-03 15:41:40.424174 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.81s 2025-06-03 15:41:40.424180 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.66s 2025-06-03 15:41:40.424186 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.26s 2025-06-03 15:41:40.424191 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.09s 2025-06-03 15:41:40.424197 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.81s 2025-06-03 15:41:40.424203 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.68s 2025-06-03 15:41:40.424208 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2025-06-03 15:41:40.424214 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.45s 2025-06-03 15:41:40.424220 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s 2025-06-03 15:41:40.424225 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.31s 2025-06-03 
15:41:40.424231 | orchestrator | 2025-06-03 15:41:40 | INFO  | Task 59c2b23c-d503-413f-a93a-341903632085 is in state STARTED 2025-06-03 15:41:40.424237 | orchestrator | 2025-06-03 15:41:40 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:43.466293 | orchestrator | 2025-06-03 15:41:43.466389 | orchestrator | 2025-06-03 15:41:43.466406 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-06-03 15:41:43.466420 | orchestrator | 2025-06-03 15:41:43.466434 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-03 15:41:43.466446 | orchestrator | Tuesday 03 June 2025 15:38:31 +0000 (0:00:00.089) 0:00:00.089 ********** 2025-06-03 15:41:43.466460 | orchestrator | ok: [localhost] => { 2025-06-03 15:41:43.466476 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-06-03 15:41:43.466810 | orchestrator | } 2025-06-03 15:41:43.466829 | orchestrator | 2025-06-03 15:41:43.466838 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-06-03 15:41:43.466846 | orchestrator | Tuesday 03 June 2025 15:38:31 +0000 (0:00:00.041) 0:00:00.130 ********** 2025-06-03 15:41:43.466854 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-06-03 15:41:43.466864 | orchestrator | ...ignoring 2025-06-03 15:41:43.466873 | orchestrator | 2025-06-03 15:41:43.466881 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-06-03 15:41:43.466889 | orchestrator | Tuesday 03 June 2025 15:38:34 +0000 (0:00:02.743) 0:00:02.874 ********** 2025-06-03 15:41:43.466897 | orchestrator | skipping: [localhost] 2025-06-03 15:41:43.466905 | orchestrator | 2025-06-03 15:41:43.466913 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-06-03 15:41:43.466920 | orchestrator | Tuesday 03 June 2025 15:38:34 +0000 (0:00:00.074) 0:00:02.949 ********** 2025-06-03 15:41:43.466928 | orchestrator | ok: [localhost] 2025-06-03 15:41:43.466936 | orchestrator | 2025-06-03 15:41:43.466944 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:41:43.466952 | orchestrator | 2025-06-03 15:41:43.466960 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:41:43.466968 | orchestrator | Tuesday 03 June 2025 15:38:34 +0000 (0:00:00.167) 0:00:03.116 ********** 2025-06-03 15:41:43.466975 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:43.466983 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:41:43.466991 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:41:43.466999 | orchestrator | 2025-06-03 15:41:43.467020 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:41:43.467028 | orchestrator | Tuesday 03 June 2025 15:38:34 +0000 (0:00:00.316) 0:00:03.433 ********** 2025-06-03 15:41:43.467037 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-03 15:41:43.467045 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
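The play above probes 192.168.16.9:3306 for the string `MariaDB` and uses the (deliberately ignored) failure to decide the Kolla action: `upgrade` if a cluster is already answering, otherwise the requested fresh-deploy action. A minimal sketch of that decision logic, with hypothetical helper names, assuming the probe simply checks whether a TCP banner containing `MariaDB` is readable (mirroring the Ansible `wait_for` task with `search_regex`):

```python
import socket


def probe_mariadb(host: str, port: int = 3306, timeout: float = 2.0) -> bool:
    """Return True if a MariaDB banner is readable at host:port.

    Any connection error or timeout just means "not deployed yet",
    which the playbook treats as a non-fatal, expected condition.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            banner = sock.recv(128)  # MariaDB sends its greeting first
            return b"MariaDB" in banner
    except OSError:
        return False


def pick_kolla_action(mariadb_running: bool, kolla_action_ng: str = "deploy") -> str:
    """Upgrade an existing cluster; otherwise fall back to the requested action."""
    return "upgrade" if mariadb_running else kolla_action_ng
```

This reproduces only the branching visible in the transcript (skip the `upgrade` task, take the `kolla_action_ng` one); the real logic lives in the OSISM playbooks.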
2025-06-03 15:41:43.467053 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-03 15:41:43.467061 | orchestrator | 2025-06-03 15:41:43.467069 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-03 15:41:43.467076 | orchestrator | 2025-06-03 15:41:43.467084 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-03 15:41:43.467092 | orchestrator | Tuesday 03 June 2025 15:38:35 +0000 (0:00:00.522) 0:00:03.956 ********** 2025-06-03 15:41:43.467100 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-03 15:41:43.467108 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-03 15:41:43.467117 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-03 15:41:43.467125 | orchestrator | 2025-06-03 15:41:43.467133 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-03 15:41:43.467140 | orchestrator | Tuesday 03 June 2025 15:38:35 +0000 (0:00:00.367) 0:00:04.323 ********** 2025-06-03 15:41:43.467148 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:41:43.467157 | orchestrator | 2025-06-03 15:41:43.467165 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-06-03 15:41:43.467173 | orchestrator | Tuesday 03 June 2025 15:38:36 +0000 (0:00:00.660) 0:00:04.983 ********** 2025-06-03 15:41:43.467203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-03 15:41:43.467230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-03 15:41:43.467241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-03 15:41:43.467256 | orchestrator | 2025-06-03 15:41:43.467272 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-06-03 15:41:43.467280 | orchestrator | Tuesday 03 June 2025 15:38:39 +0000 (0:00:03.100) 0:00:08.084 ********** 2025-06-03 15:41:43.467289 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:43.467298 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:43.467306 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:43.467314 | orchestrator | 2025-06-03 15:41:43.467322 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-06-03 15:41:43.467330 | orchestrator | Tuesday 03 June 2025 15:38:40 +0000 (0:00:00.698) 0:00:08.782 ********** 2025-06-03 15:41:43.467338 | orchestrator | skipping: [testbed-node-1] 2025-06-03 
15:41:43.467345 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:43.467353 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:43.467361 | orchestrator | 2025-06-03 15:41:43.467369 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-06-03 15:41:43.467377 | orchestrator | Tuesday 03 June 2025 15:38:41 +0000 (0:00:01.755) 0:00:10.538 ********** 2025-06-03 15:41:43.467396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-03 15:41:43.467411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-03 15:41:43.467430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-03 
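Every node's item dump embeds the same `custom_member_list`: testbed-node-0 as the active HAProxy backend, the other two marked `backup`, so only one Galera node receives traffic at a time. A hedged sketch (hypothetical helper, not part of kolla-ansible) of generating those lines from an ordered node-to-address mapping:

```python
def haproxy_member_lines(nodes: dict[str, str], port: int = 3306) -> list[str]:
    """Render HAProxy backend server lines matching the custom_member_list
    in the log: the first node is the primary; all others get 'backup' so
    writes hit a single Galera node and failover is deterministic."""
    lines = []
    for index, (name, addr) in enumerate(nodes.items()):
        line = (f"server {name} {addr}:{port} "
                f"check port {port} inter 2000 rise 2 fall 5")
        if index > 0:
            line += " backup"
        lines.append(line)
    return lines
```

With the three testbed nodes this yields exactly the pattern shown in the dumps (primary on `.10`, backups on `.11` and `.12`).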
15:41:43.467440 | orchestrator | 2025-06-03 15:41:43.467448 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-06-03 15:41:43.467456 | orchestrator | Tuesday 03 June 2025 15:38:45 +0000 (0:00:03.674) 0:00:14.212 ********** 2025-06-03 15:41:43.467464 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:43.467472 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:43.467480 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:43.467512 | orchestrator | 2025-06-03 15:41:43.467527 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-06-03 15:41:43.467544 | orchestrator | Tuesday 03 June 2025 15:38:46 +0000 (0:00:01.081) 0:00:15.294 ********** 2025-06-03 15:41:43.467552 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:41:43.467560 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:43.467567 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:41:43.467575 | orchestrator | 2025-06-03 15:41:43.467583 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-03 15:41:43.467590 | orchestrator | Tuesday 03 June 2025 15:38:50 +0000 (0:00:04.144) 0:00:19.438 ********** 2025-06-03 15:41:43.467598 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:41:43.467606 | orchestrator | 2025-06-03 15:41:43.467614 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-03 15:41:43.467622 | orchestrator | Tuesday 03 June 2025 15:38:51 +0000 (0:00:00.488) 0:00:19.926 ********** 2025-06-03 15:41:43.467638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:41:43.467648 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:43.467661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:41:43.467676 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:43.467691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:41:43.467700 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:43.467708 | orchestrator | 2025-06-03 15:41:43.467716 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-03 15:41:43.467724 | orchestrator | Tuesday 03 June 2025 15:38:54 +0000 (0:00:03.284) 0:00:23.211 ********** 2025-06-03 15:41:43.467736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:41:43.467750 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:43.467763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:41:43.467772 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:43.467784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:41:43.467798 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:43.467806 | orchestrator | 2025-06-03 15:41:43.467814 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-03 15:41:43.467822 | orchestrator | Tuesday 03 June 2025 15:38:56 +0000 (0:00:02.345) 0:00:25.556 ********** 2025-06-03 15:41:43.467830 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:41:43.467839 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:43.467858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:41:43.467872 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:43.467881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-03 15:41:43.467889 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:43.467897 | orchestrator | 2025-06-03 15:41:43.467905 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-06-03 15:41:43.467913 | orchestrator | Tuesday 03 June 2025 15:38:59 +0000 
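[Editor's note: the `healthcheck` block repeated in the items above ({'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}) maps onto container runtime health-check options. A minimal sketch of that mapping, spelled with Docker's CLI flags; this is illustrative, not kolla-ansible's actual template logic:]

```python
def healthcheck_flags(hc):
    """Translate a kolla-style healthcheck dict into docker-run style flags.
    Durations in the dict are plain seconds, so we append 's'."""
    flags = [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]
    # kolla uses the CMD-SHELL form: run the check through a shell.
    if hc["test"][0] == "CMD-SHELL":
        flags.append(f"--health-cmd={hc['test'][1]}")
    return flags

flags = healthcheck_flags({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "/usr/bin/clustercheck"], "timeout": "30",
})
```

/usr/bin/clustercheck is the Galera-aware probe: it reports healthy only when the node is synced (and, with AVAILABLE_WHEN_DONOR=1, also while acting as a donor).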
(0:00:02.966) 0:00:28.522 ********** 2025-06-03 15:41:43.467926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 2025-06-03 15:41:43 | INFO  | Task d975f909-c71f-4dcc-a54f-d1176b7bd747 is in state SUCCESS 2025-06-03 15:41:43.467941 | orchestrator | '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}}}}) 2025-06-03 15:41:43.467955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-03 15:41:43.467972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 
'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-03 15:41:43.467985 | orchestrator | 2025-06-03 15:41:43.467993 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-06-03 15:41:43.468001 | orchestrator | Tuesday 03 June 2025 15:39:03 +0000 (0:00:04.181) 0:00:32.704 
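[Editor's note: the `custom_member_list` entries rendered in the items above follow a fixed pattern — the primary first, remaining nodes appended with `backup` so HAProxy only fails over when the primary is down. A sketch that reproduces the exact line format seen in the log (node names and addresses below are the testbed's):]

```python
def haproxy_member_lines(nodes, port=3306):
    """Render HAProxy 'server' lines: first node is the active backend,
    the rest are marked 'backup' (active/passive MariaDB behind one VIP)."""
    lines = []
    for i, (name, addr) in enumerate(nodes):
        line = (f" server {name} {addr}:{port} check port {port} "
                f"inter 2000 rise 2 fall 5")
        if i > 0:
            line += " backup"
        lines.append(line)
    return lines

members = haproxy_member_lines([
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
])
```

Sending all writes to one node at a time sidesteps Galera write-conflict certification failures, which is why only the first member is non-backup.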
********** 2025-06-03 15:41:43.468009 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:43.468017 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:41:43.468025 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:41:43.468033 | orchestrator | 2025-06-03 15:41:43.468047 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-06-03 15:41:43.468055 | orchestrator | Tuesday 03 June 2025 15:39:05 +0000 (0:00:01.090) 0:00:33.795 ********** 2025-06-03 15:41:43.468063 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:43.468071 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:41:43.468079 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:41:43.468087 | orchestrator | 2025-06-03 15:41:43.468095 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-06-03 15:41:43.468102 | orchestrator | Tuesday 03 June 2025 15:39:05 +0000 (0:00:00.374) 0:00:34.169 ********** 2025-06-03 15:41:43.468110 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:43.468118 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:41:43.468126 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:41:43.468134 | orchestrator | 2025-06-03 15:41:43.468141 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-06-03 15:41:43.468149 | orchestrator | Tuesday 03 June 2025 15:39:05 +0000 (0:00:00.328) 0:00:34.497 ********** 2025-06-03 15:41:43.468158 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-06-03 15:41:43.468166 | orchestrator | ...ignoring 2025-06-03 15:41:43.468174 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-06-03 15:41:43.468182 | orchestrator | ...ignoring 2025-06-03 15:41:43.468190 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-06-03 15:41:43.468198 | orchestrator | ...ignoring 2025-06-03 15:41:43.468206 | orchestrator | 2025-06-03 15:41:43.468214 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-06-03 15:41:43.468221 | orchestrator | Tuesday 03 June 2025 15:39:16 +0000 (0:00:10.978) 0:00:45.475 ********** 2025-06-03 15:41:43.468229 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:43.468237 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:41:43.468245 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:41:43.468253 | orchestrator | 2025-06-03 15:41:43.468261 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-06-03 15:41:43.468268 | orchestrator | Tuesday 03 June 2025 15:39:17 +0000 (0:00:00.650) 0:00:46.126 ********** 2025-06-03 15:41:43.468276 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:43.468284 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:43.468292 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:43.468300 | orchestrator | 2025-06-03 15:41:43.468307 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-06-03 15:41:43.468315 | orchestrator | Tuesday 03 June 2025 15:39:17 +0000 (0:00:00.522) 0:00:46.649 ********** 2025-06-03 15:41:43.468323 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:43.468331 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:43.468339 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:43.468347 | orchestrator | 2025-06-03 15:41:43.468354 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-06-03 15:41:43.468362 | orchestrator | Tuesday 03 June 2025 15:39:18 +0000 (0:00:00.527) 0:00:47.177 ********** 2025-06-03 15:41:43.468370 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:43.468378 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:43.468391 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:43.468398 | orchestrator | 2025-06-03 15:41:43.468406 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-06-03 15:41:43.468414 | orchestrator | Tuesday 03 June 2025 15:39:18 +0000 (0:00:00.446) 0:00:47.623 ********** 2025-06-03 15:41:43.468422 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:43.468430 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:41:43.468438 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:41:43.468446 | orchestrator | 2025-06-03 15:41:43.468458 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-06-03 15:41:43.468466 | orchestrator | Tuesday 03 June 2025 15:39:19 +0000 (0:00:00.839) 0:00:48.463 ********** 2025-06-03 15:41:43.468474 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:43.468482 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:43.468520 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:43.468529 | orchestrator | 2025-06-03 15:41:43.468537 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-03 15:41:43.468545 | orchestrator | Tuesday 03 June 2025 15:39:20 +0000 (0:00:00.492) 0:00:48.956 ********** 2025-06-03 15:41:43.468553 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:43.468561 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:43.468569 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-06-03 15:41:43.468577 | orchestrator | 2025-06-03 
15:41:43.468585 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-06-03 15:41:43.468593 | orchestrator | Tuesday 03 June 2025 15:39:20 +0000 (0:00:00.409) 0:00:49.365 ********** 2025-06-03 15:41:43.468600 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:43.468608 | orchestrator | 2025-06-03 15:41:43.468616 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-06-03 15:41:43.468624 | orchestrator | Tuesday 03 June 2025 15:39:31 +0000 (0:00:10.885) 0:01:00.250 ********** 2025-06-03 15:41:43.468632 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:43.468640 | orchestrator | 2025-06-03 15:41:43.468647 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-03 15:41:43.468655 | orchestrator | Tuesday 03 June 2025 15:39:31 +0000 (0:00:00.120) 0:01:00.370 ********** 2025-06-03 15:41:43.468663 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:43.468671 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:43.468679 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:43.468686 | orchestrator | 2025-06-03 15:41:43.468694 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-06-03 15:41:43.468702 | orchestrator | Tuesday 03 June 2025 15:39:32 +0000 (0:00:01.032) 0:01:01.403 ********** 2025-06-03 15:41:43.468710 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:43.468718 | orchestrator | 2025-06-03 15:41:43.468726 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-06-03 15:41:43.468738 | orchestrator | Tuesday 03 June 2025 15:39:39 +0000 (0:00:07.369) 0:01:08.773 ********** 2025-06-03 15:41:43.468746 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:43.468754 | orchestrator | 2025-06-03 15:41:43.468762 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2025-06-03 15:41:43.468770 | orchestrator | Tuesday 03 June 2025 15:39:42 +0000 (0:00:02.546) 0:01:11.320 ********** 2025-06-03 15:41:43.468778 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:43.468785 | orchestrator | 2025-06-03 15:41:43.468793 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-06-03 15:41:43.468802 | orchestrator | Tuesday 03 June 2025 15:39:44 +0000 (0:00:02.438) 0:01:13.759 ********** 2025-06-03 15:41:43.468809 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:43.468817 | orchestrator | 2025-06-03 15:41:43.468825 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-06-03 15:41:43.468833 | orchestrator | Tuesday 03 June 2025 15:39:45 +0000 (0:00:00.134) 0:01:13.893 ********** 2025-06-03 15:41:43.468841 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:43.468854 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:43.468862 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:43.468870 | orchestrator | 2025-06-03 15:41:43.468878 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-06-03 15:41:43.468886 | orchestrator | Tuesday 03 June 2025 15:39:45 +0000 (0:00:00.587) 0:01:14.480 ********** 2025-06-03 15:41:43.468894 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:43.468902 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-03 15:41:43.468910 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:41:43.468918 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:41:43.468925 | orchestrator | 2025-06-03 15:41:43.468933 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-03 15:41:43.468941 | orchestrator | skipping: no hosts matched 2025-06-03 15:41:43.468949 | orchestrator | 2025-06-03 15:41:43.468957 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-03 15:41:43.468965 | orchestrator | 2025-06-03 15:41:43.468973 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-03 15:41:43.468981 | orchestrator | Tuesday 03 June 2025 15:39:46 +0000 (0:00:00.318) 0:01:14.799 ********** 2025-06-03 15:41:43.468989 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:41:43.468997 | orchestrator | 2025-06-03 15:41:43.469004 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-03 15:41:43.469012 | orchestrator | Tuesday 03 June 2025 15:40:06 +0000 (0:00:20.469) 0:01:35.268 ********** 2025-06-03 15:41:43.469020 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:41:43.469028 | orchestrator | 2025-06-03 15:41:43.469036 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-03 15:41:43.469044 | orchestrator | Tuesday 03 June 2025 15:40:27 +0000 (0:00:20.589) 0:01:55.857 ********** 2025-06-03 15:41:43.469052 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:41:43.469059 | orchestrator | 2025-06-03 15:41:43.469067 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-03 15:41:43.469075 | orchestrator | 2025-06-03 15:41:43.469083 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-03 15:41:43.469091 | orchestrator | Tuesday 03 June 2025 15:40:29 +0000 (0:00:02.428) 0:01:58.286 ********** 2025-06-03 15:41:43.469099 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:41:43.469107 | orchestrator | 2025-06-03 15:41:43.469115 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-03 15:41:43.469123 | orchestrator | Tuesday 03 June 2025 15:40:53 +0000 (0:00:23.675) 0:02:21.961 ********** 2025-06-03 15:41:43.469131 | 
orchestrator | ok: [testbed-node-2] 2025-06-03 15:41:43.469139 | orchestrator | 2025-06-03 15:41:43.469146 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-03 15:41:43.469154 | orchestrator | Tuesday 03 June 2025 15:41:08 +0000 (0:00:15.566) 0:02:37.528 ********** 2025-06-03 15:41:43.469167 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:41:43.469175 | orchestrator | 2025-06-03 15:41:43.469183 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-03 15:41:43.469191 | orchestrator | 2025-06-03 15:41:43.469199 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-03 15:41:43.469207 | orchestrator | Tuesday 03 June 2025 15:41:11 +0000 (0:00:02.681) 0:02:40.209 ********** 2025-06-03 15:41:43.469215 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:43.469222 | orchestrator | 2025-06-03 15:41:43.469230 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-03 15:41:43.469238 | orchestrator | Tuesday 03 June 2025 15:41:22 +0000 (0:00:11.520) 0:02:51.730 ********** 2025-06-03 15:41:43.469246 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:43.469254 | orchestrator | 2025-06-03 15:41:43.469262 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-03 15:41:43.469270 | orchestrator | Tuesday 03 June 2025 15:41:27 +0000 (0:00:04.594) 0:02:56.325 ********** 2025-06-03 15:41:43.469283 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:43.469291 | orchestrator | 2025-06-03 15:41:43.469299 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-03 15:41:43.469307 | orchestrator | 2025-06-03 15:41:43.469315 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-03 15:41:43.469322 | orchestrator | 
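[Editor's note: the repeated "Wait for MariaDB service to sync WSREP" tasks above poll Galera's replication state; the underlying signal is the `wsrep_local_state_comment` status variable reaching 'Synced'. A sketch of just the decision step — the rows stand in for the result of `SHOW STATUS LIKE 'wsrep_local_state_comment'` as a client library would return them:]

```python
def is_wsrep_synced(status_rows):
    """True only when the node reports wsrep_local_state_comment = 'Synced';
    states like 'Joined' or 'Donor/Desynced' mean it is still catching up."""
    status = dict(status_rows)
    return status.get("wsrep_local_state_comment") == "Synced"
```

Gating each restart on this state is what lets the play restart nodes one at a time without ever dropping the cluster below quorum.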
Tuesday 03 June 2025 15:41:29 +0000 (0:00:02.409) 0:02:58.734 ********** 2025-06-03 15:41:43.469330 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:41:43.469338 | orchestrator | 2025-06-03 15:41:43.469346 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-06-03 15:41:43.469354 | orchestrator | Tuesday 03 June 2025 15:41:30 +0000 (0:00:00.533) 0:02:59.268 ********** 2025-06-03 15:41:43.469362 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:43.469370 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:43.469378 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:43.469385 | orchestrator | 2025-06-03 15:41:43.469393 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-06-03 15:41:43.469401 | orchestrator | Tuesday 03 June 2025 15:41:32 +0000 (0:00:02.452) 0:03:01.721 ********** 2025-06-03 15:41:43.469409 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:43.469421 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:43.469429 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:43.469437 | orchestrator | 2025-06-03 15:41:43.469445 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-06-03 15:41:43.469453 | orchestrator | Tuesday 03 June 2025 15:41:35 +0000 (0:00:02.157) 0:03:03.879 ********** 2025-06-03 15:41:43.469461 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:43.469468 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:43.469476 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:43.469484 | orchestrator | 2025-06-03 15:41:43.469547 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-06-03 15:41:43.469556 | orchestrator | Tuesday 03 June 2025 15:41:37 +0000 (0:00:02.336) 0:03:06.215 ********** 2025-06-03 15:41:43.469563 | 
orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:43.469571 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:43.469579 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:41:43.469587 | orchestrator | 2025-06-03 15:41:43.469595 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-06-03 15:41:43.469603 | orchestrator | Tuesday 03 June 2025 15:41:39 +0000 (0:00:02.081) 0:03:08.296 ********** 2025-06-03 15:41:43.469611 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:41:43.469619 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:41:43.469627 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:41:43.469635 | orchestrator | 2025-06-03 15:41:43.469642 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-03 15:41:43.469650 | orchestrator | Tuesday 03 June 2025 15:41:42 +0000 (0:00:02.838) 0:03:11.135 ********** 2025-06-03 15:41:43.469658 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:41:43.469666 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:41:43.469674 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:41:43.469681 | orchestrator | 2025-06-03 15:41:43.469689 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:41:43.469697 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-03 15:41:43.469706 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-06-03 15:41:43.469715 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-03 15:41:43.469723 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-03 15:41:43.469737 | orchestrator | 2025-06-03 15:41:43.469745 | orchestrator | 2025-06-03 15:41:43.469752 | orchestrator | 
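[Editor's note: the PLAY RECAP above is machine-friendly — `host : key=value ...` pairs. A small sketch for pulling the per-host counters out of such a line, e.g. to flag builds with failed or unreachable hosts:]

```python
import re

RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line):
    """Parse one PLAY RECAP line into (host, {stat: count}), or None."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    stats = dict(
        (k, int(v))
        for k, v in (pair.split("=") for pair in m.group("stats").split())
    )
    return m.group("host"), stats

host, stats = parse_recap_line(
    "testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1"
)
```

Note that ignored=1 on every host corresponds to the expected port-liveness timeout earlier; a recap-based gate should treat `failed`/`unreachable` as fatal but tolerate `ignored`.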
TASKS RECAP ******************************************************************** 2025-06-03 15:41:43.469760 | orchestrator | Tuesday 03 June 2025 15:41:42 +0000 (0:00:00.226) 0:03:11.361 ********** 2025-06-03 15:41:43.469768 | orchestrator | =============================================================================== 2025-06-03 15:41:43.469776 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 44.14s 2025-06-03 15:41:43.469784 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.16s 2025-06-03 15:41:43.469792 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.52s 2025-06-03 15:41:43.469800 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.98s 2025-06-03 15:41:43.469808 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.89s 2025-06-03 15:41:43.469821 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.37s 2025-06-03 15:41:43.469829 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.11s 2025-06-03 15:41:43.469837 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.59s 2025-06-03 15:41:43.469844 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.18s 2025-06-03 15:41:43.469852 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.14s 2025-06-03 15:41:43.469860 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.67s 2025-06-03 15:41:43.469868 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.28s 2025-06-03 15:41:43.469875 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.10s 2025-06-03 15:41:43.469883 | orchestrator | service-cert-copy : 
mariadb | Copying over backend internal TLS key ----- 2.97s 2025-06-03 15:41:43.469891 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.84s 2025-06-03 15:41:43.469899 | orchestrator | Check MariaDB service --------------------------------------------------- 2.74s 2025-06-03 15:41:43.469907 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.55s 2025-06-03 15:41:43.469915 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.45s 2025-06-03 15:41:43.469922 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.44s 2025-06-03 15:41:43.469930 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.41s 2025-06-03 15:41:43.469938 | orchestrator | 2025-06-03 15:41:43 | INFO  | Task 59c2b23c-d503-413f-a93a-341903632085 is in state STARTED 2025-06-03 15:41:43.469947 | orchestrator | 2025-06-03 15:41:43 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:46.515315 | orchestrator | 2025-06-03 15:41:46 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED 2025-06-03 15:41:46.517912 | orchestrator | 2025-06-03 15:41:46 | INFO  | Task 59c2b23c-d503-413f-a93a-341903632085 is in state STARTED 2025-06-03 15:41:46.519930 | orchestrator | 2025-06-03 15:41:46 | INFO  | Task 370c44b3-02c1-45b9-a587-25a6f93fa861 is in state STARTED 2025-06-03 15:41:46.520794 | orchestrator | 2025-06-03 15:41:46 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:41:49.560546 | orchestrator | 2025-06-03 15:41:49 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED 2025-06-03 15:41:49.562202 | orchestrator | 2025-06-03 15:41:49 | INFO  | Task 59c2b23c-d503-413f-a93a-341903632085 is in state STARTED 2025-06-03 15:41:49.564348 | orchestrator | 2025-06-03 15:41:49 | INFO  | Task 370c44b3-02c1-45b9-a587-25a6f93fa861 is in state STARTED 2025-06-03 
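[Editor's note: entries in the TASKS RECAP above pad the task name with dashes and end in a duration, which makes extracting the slowest tasks a small parsing exercise. A sketch matching the format shown:]

```python
import re

# Task name, a run of 2+ dashes as padding, then a duration like '44.14s'.
DURATION_RE = re.compile(r"^(?P<task>.*?)\s*-{2,}\s*(?P<secs>\d+\.\d+)s$")

def parse_duration_line(line):
    """Parse 'task name ----- 44.14s' into (task, seconds), or None."""
    m = DURATION_RE.match(line.strip())
    if not m:
        return None
    return m.group("task"), float(m.group("secs"))

task, secs = parse_duration_line(
    "mariadb : Restart MariaDB container ------------------------------------ 44.14s"
)
```

Unsurprisingly the rolling container restarts and the port-liveness waits dominate the three-minute wall time of this play.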
15:41:49.564374 | orchestrator | 2025-06-03 15:41:49 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:41:52.598546 | orchestrator | 2025-06-03 15:41:52 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED
2025-06-03 15:41:52.598650 | orchestrator | 2025-06-03 15:41:52 | INFO  | Task 59c2b23c-d503-413f-a93a-341903632085 is in state STARTED
2025-06-03 15:41:52.601826 | orchestrator | 2025-06-03 15:41:52 | INFO  | Task 370c44b3-02c1-45b9-a587-25a6f93fa861 is in state STARTED
2025-06-03 15:41:52.601891 | orchestrator | 2025-06-03 15:41:52 | INFO  | Wait 1 second(s) until the next check
[identical poll cycles (all three tasks in state STARTED, then "Wait 1 second(s) until the next check") repeated every ~3 seconds from 15:41:55 through 15:43:08]
2025-06-03 15:43:11.863935 | orchestrator | 2025-06-03 15:43:11 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED
2025-06-03 15:43:11.867884 | orchestrator | 2025-06-03 15:43:11 | INFO  | Task 59c2b23c-d503-413f-a93a-341903632085 is in state SUCCESS
2025-06-03 15:43:11.869834 | orchestrator | 2025-06-03
15:43:11.869874 | orchestrator |
2025-06-03 15:43:11.869879 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-06-03 15:43:11.869886 | orchestrator |
2025-06-03 15:43:11.869891 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-06-03 15:43:11.869895 | orchestrator | Tuesday 03 June 2025 15:40:59 +0000 (0:00:00.569) 0:00:00.569 **********
2025-06-03 15:43:11.869900 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:43:11.869905 | orchestrator |
2025-06-03 15:43:11.869909 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-06-03 15:43:11.869914 | orchestrator | Tuesday 03 June 2025 15:41:00 +0000 (0:00:00.641) 0:00:01.210 **********
2025-06-03 15:43:11.869918 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:43:11.869924 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:43:11.869928 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:43:11.869932 | orchestrator |
2025-06-03 15:43:11.869936 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-06-03 15:43:11.869940 | orchestrator | Tuesday 03 June 2025 15:41:01 +0000 (0:00:00.631) 0:00:01.842 **********
2025-06-03 15:43:11.869963 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:43:11.869968 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:43:11.869972 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:43:11.869975 | orchestrator |
2025-06-03 15:43:11.869980 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-06-03 15:43:11.869984 | orchestrator | Tuesday 03 June 2025 15:41:01 +0000 (0:00:00.295) 0:00:02.138 **********
2025-06-03 15:43:11.869988 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:43:11.869992 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:43:11.869996 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:43:11.869999 | orchestrator |
2025-06-03 15:43:11.870004 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-06-03 15:43:11.870008 | orchestrator | Tuesday 03 June 2025 15:41:02 +0000 (0:00:00.797) 0:00:02.935 **********
2025-06-03 15:43:11.870043 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:43:11.870048 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:43:11.870052 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:43:11.870056 | orchestrator |
2025-06-03 15:43:11.870060 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-06-03 15:43:11.870065 | orchestrator | Tuesday 03 June 2025 15:41:02 +0000 (0:00:00.311) 0:00:03.247 **********
2025-06-03 15:43:11.870069 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:43:11.870072 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:43:11.870076 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:43:11.870080 | orchestrator |
2025-06-03 15:43:11.870084 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-06-03 15:43:11.870088 | orchestrator | Tuesday 03 June 2025 15:41:02 +0000 (0:00:00.298) 0:00:03.546 **********
2025-06-03 15:43:11.870092 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:43:11.870096 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:43:11.870100 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:43:11.870104 | orchestrator |
2025-06-03 15:43:11.870108 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-06-03 15:43:11.870112 | orchestrator | Tuesday 03 June 2025 15:41:03 +0000 (0:00:00.326) 0:00:03.872 **********
2025-06-03 15:43:11.870116 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:43:11.870121 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:43:11.870125 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:43:11.870128 | orchestrator |
2025-06-03 15:43:11.870132 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-06-03 15:43:11.870136 | orchestrator | Tuesday 03 June 2025 15:41:03 +0000 (0:00:00.601) 0:00:04.473 **********
2025-06-03 15:43:11.870140 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:43:11.870144 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:43:11.870148 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:43:11.870152 | orchestrator |
2025-06-03 15:43:11.870156 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-06-03 15:43:11.870159 | orchestrator | Tuesday 03 June 2025 15:41:04 +0000 (0:00:00.302) 0:00:04.776 **********
2025-06-03 15:43:11.870164 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-03 15:43:11.870168 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-03 15:43:11.870171 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-03 15:43:11.870176 | orchestrator |
2025-06-03 15:43:11.870180 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-06-03 15:43:11.870183 | orchestrator | Tuesday 03 June 2025 15:41:04 +0000 (0:00:00.632) 0:00:05.408 **********
2025-06-03 15:43:11.870187 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:43:11.870191 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:43:11.870195 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:43:11.870199 | orchestrator |
2025-06-03 15:43:11.870203 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-06-03 15:43:11.870207 | orchestrator | Tuesday 03 June 2025 15:41:05 +0000 (0:00:00.432) 0:00:05.840 **********
2025-06-03 15:43:11.870215 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] =>
(item=testbed-node-0)
2025-06-03 15:43:11.870219 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-03 15:43:11.870233 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-03 15:43:11.870238 | orchestrator |
2025-06-03 15:43:11.870241 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-06-03 15:43:11.870245 | orchestrator | Tuesday 03 June 2025 15:41:07 +0000 (0:00:02.188) 0:00:08.029 **********
2025-06-03 15:43:11.870249 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-03 15:43:11.870253 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-03 15:43:11.870257 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-03 15:43:11.870261 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:43:11.870265 | orchestrator |
2025-06-03 15:43:11.870269 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-06-03 15:43:11.870283 | orchestrator | Tuesday 03 June 2025 15:41:07 +0000 (0:00:00.394) 0:00:08.423 **********
2025-06-03 15:43:11.870289 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-03 15:43:11.870295 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-03 15:43:11.870299 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-03 15:43:11.870303 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:43:11.870307 | orchestrator |
2025-06-03 15:43:11.870311 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-06-03 15:43:11.870315 | orchestrator | Tuesday 03 June 2025 15:41:08 +0000 (0:00:00.736) 0:00:09.159 **********
2025-06-03 15:43:11.870320 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-03 15:43:11.870327 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-03 15:43:11.870331 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-03 15:43:11.870335 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:43:11.870339 | orchestrator |
2025-06-03 15:43:11.870343 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-06-03 15:43:11.870347 | orchestrator | Tuesday 03 June 2025 15:41:08 +0000 (0:00:00.156) 0:00:09.315 **********
2025-06-03 15:43:11.870390 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '78f60fa69af2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-03 15:41:05.796460', 'end': '2025-06-03 15:41:05.839669', 'delta': '0:00:00.043209', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['78f60fa69af2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-03 15:43:11.870407 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b834b7a5809a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-03 15:41:06.508166', 'end': '2025-06-03 15:41:06.545299', 'delta': '0:00:00.037133', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b834b7a5809a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-03 15:43:11.870451 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '673a9bcd3b50', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-03 15:41:07.132387', 'end':
'2025-06-03 15:41:07.172129', 'delta': '0:00:00.039742', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['673a9bcd3b50'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-03 15:43:11.870457 | orchestrator |
2025-06-03 15:43:11.870461 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-06-03 15:43:11.870465 | orchestrator | Tuesday 03 June 2025 15:41:08 +0000 (0:00:00.379) 0:00:09.695 **********
2025-06-03 15:43:11.870471 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:43:11.870477 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:43:11.870482 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:43:11.870488 | orchestrator |
2025-06-03 15:43:11.870495 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-06-03 15:43:11.870558 | orchestrator | Tuesday 03 June 2025 15:41:09 +0000 (0:00:00.441) 0:00:10.137 **********
2025-06-03 15:43:11.870569 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-06-03 15:43:11.870575 | orchestrator |
2025-06-03 15:43:11.870581 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-06-03 15:43:11.870586 | orchestrator | Tuesday 03 June 2025 15:41:11 +0000 (0:00:01.743) 0:00:11.881 **********
2025-06-03 15:43:11.870592 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:43:11.870597 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:43:11.870603 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:43:11.870608 | orchestrator |
2025-06-03 15:43:11.870614 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-06-03 15:43:11.870620 | orchestrator | Tuesday 03 June 2025 15:41:11 +0000 (0:00:00.309) 0:00:12.190 **********
2025-06-03 15:43:11.870626 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:43:11.870633 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:43:11.870639 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:43:11.870643 | orchestrator |
2025-06-03 15:43:11.870647 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-03 15:43:11.870656 | orchestrator | Tuesday 03 June 2025 15:41:11 +0000 (0:00:00.425) 0:00:12.616 **********
2025-06-03 15:43:11.870660 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:43:11.870664 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:43:11.870668 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:43:11.870671 | orchestrator |
2025-06-03 15:43:11.870675 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-06-03 15:43:11.870679 | orchestrator | Tuesday 03 June 2025 15:41:12 +0000 (0:00:00.477) 0:00:13.094 **********
2025-06-03 15:43:11.870682 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:43:11.870686 | orchestrator |
2025-06-03 15:43:11.870690 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-06-03 15:43:11.870693 | orchestrator | Tuesday 03 June 2025 15:41:12 +0000 (0:00:00.136) 0:00:13.230 **********
2025-06-03 15:43:11.870697 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:43:11.870701 | orchestrator |
2025-06-03 15:43:11.870705 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-03 15:43:11.870708 | orchestrator | Tuesday 03 June 2025 15:41:12 +0000 (0:00:00.224) 0:00:13.455 **********
2025-06-03 15:43:11.870712 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:43:11.870716 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:43:11.870719 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:43:11.870723 | orchestrator |
2025-06-03 15:43:11.870917 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-06-03 15:43:11.870922 | orchestrator | Tuesday 03 June 2025 15:41:13 +0000 (0:00:00.302) 0:00:13.757 **********
2025-06-03 15:43:11.870926 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:43:11.870930 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:43:11.870933 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:43:11.870937 | orchestrator |
2025-06-03 15:43:11.870941 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-06-03 15:43:11.870945 | orchestrator | Tuesday 03 June 2025 15:41:13 +0000 (0:00:00.323) 0:00:14.081 **********
2025-06-03 15:43:11.870949 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:43:11.870952 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:43:11.870956 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:43:11.870960 | orchestrator |
2025-06-03 15:43:11.870964 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-06-03 15:43:11.870968 | orchestrator | Tuesday 03 June 2025 15:41:13 +0000 (0:00:00.506) 0:00:14.587 **********
2025-06-03 15:43:11.870971 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:43:11.870975 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:43:11.870979 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:43:11.870983 | orchestrator |
2025-06-03 15:43:11.870991 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-06-03 15:43:11.870995 | orchestrator | Tuesday 03 June 2025 15:41:14 +0000 (0:00:00.314) 0:00:14.902 **********
2025-06-03 15:43:11.870999 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:43:11.871003 | orchestrator |
skipping: [testbed-node-4]
2025-06-03 15:43:11.871006 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:43:11.871010 | orchestrator |
2025-06-03 15:43:11.871014 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-06-03 15:43:11.871018 | orchestrator | Tuesday 03 June 2025 15:41:14 +0000 (0:00:00.326) 0:00:15.228 **********
2025-06-03 15:43:11.871021 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:43:11.871025 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:43:11.871029 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:43:11.871033 | orchestrator |
2025-06-03 15:43:11.871037 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-06-03 15:43:11.871047 | orchestrator | Tuesday 03 June 2025 15:41:14 +0000 (0:00:00.304) 0:00:15.533 **********
2025-06-03 15:43:11.871051 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:43:11.871054 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:43:11.871058 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:43:11.871067 | orchestrator |
2025-06-03 15:43:11.871071 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-06-03 15:43:11.871075 | orchestrator | Tuesday 03 June 2025 15:41:15 +0000 (0:00:00.506) 0:00:16.040 **********
2025-06-03 15:43:11.871079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5a262827--4eba--5d37--ab06--09e1d49a835c-osd--block--5a262827--4eba--5d37--ab06--09e1d49a835c', 'dm-uuid-LVM-iCTuAI4EJib0jwbvb8c4dXUAVjPvH6yyQD7EGdtmsu0AgRLszQFCT51KxWbLYqCJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-03 15:43:11.871085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d47078ac--4564--569b--bfa7--6d988d420f95-osd--block--d47078ac--4564--569b--bfa7--6d988d420f95', 'dm-uuid-LVM-MlNOD7DMw9sVFxWua6nlui2P6JGLIXhA9i9s0R6rxyRXeXmxqEKjHCeK1WnDSagY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-03 15:43:11.871090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-03 15:43:11.871096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-03 15:43:11.871100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-03 15:43:11.871104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-03 15:43:11.871111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-03 15:43:11.871120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f00e4ac9--9831--582f--92bc--f2b318630797-osd--block--f00e4ac9--9831--582f--92bc--f2b318630797', 'dm-uuid-LVM-99pp97M8vSiq1DcdfNowOmyxQeBHt2RQXSbZdQTdzI57JNQcp5rC1M7FuVrcNJ3v'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-03 15:43:11.871128 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-03 15:43:11.871132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2547461e--5dcb--5046--b3ed--0a182c83d3a8-osd--block--2547461e--5dcb--5046--b3ed--0a182c83d3a8', 'dm-uuid-LVM-9FhNrjVXl0cAWcs1aJgZ36y2TkiyPUOoyyVrKRwL2rhhw9kJzyHtCgnDt7vmQNTt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-03 15:43:11.871137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-03 15:43:11.871141 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-03 15:43:11.871144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0,
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:43:11.871148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:43:11.871161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part1', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part14', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part15', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': 
'106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part16', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:43:11.871170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:43:11.871175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5a262827--4eba--5d37--ab06--09e1d49a835c-osd--block--5a262827--4eba--5d37--ab06--09e1d49a835c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-emfxfm-5qIT-TG7n-rhmg-KsOA-8KKz-w6ga7w', 'scsi-0QEMU_QEMU_HARDDISK_5c901d52-eede-42c5-873c-7ade3ca032e1', 'scsi-SQEMU_QEMU_HARDDISK_5c901d52-eede-42c5-873c-7ade3ca032e1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:43:11.871180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:43:11.871183 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d47078ac--4564--569b--bfa7--6d988d420f95-osd--block--d47078ac--4564--569b--bfa7--6d988d420f95'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WBkiQf-THpz-Svwy-wmks-s5gt-2CGA-7xevri', 'scsi-0QEMU_QEMU_HARDDISK_b4ac7e97-dff3-4114-bb9f-c387d4fd8c04', 'scsi-SQEMU_QEMU_HARDDISK_b4ac7e97-dff3-4114-bb9f-c387d4fd8c04'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:43:11.871188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:43:11.871208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61b072b3-0d8d-4456-975d-55fef61370d3', 'scsi-SQEMU_QEMU_HARDDISK_61b072b3-0d8d-4456-975d-55fef61370d3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:43:11.871213 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:43:11.871217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:43:11.871221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-06-03 15:43:11.871225 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:43:11.871229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:43:11.871233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--610c71bb--335d--5813--8d53--12327c30775e-osd--block--610c71bb--335d--5813--8d53--12327c30775e', 'dm-uuid-LVM-oBbrD2y50tGUGcJrG9aMf1XrpfBgDTcIpQggtVkRRBZCLEs5YgTracTrTruq7mo4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:43:11.871290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ae8860ce--b651--5449--9c0b--e6c018225b94-osd--block--ae8860ce--b651--5449--9c0b--e6c018225b94', 'dm-uuid-LVM-hyyfRBsGTLzhJDBnkMwP7oIAf3aNljpPZneZ7Y2rVIKSrikYC813zvaJkJ2cAlU8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-03 15:43:11.871313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:43:11.871320 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:43:11.871327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f00e4ac9--9831--582f--92bc--f2b318630797-osd--block--f00e4ac9--9831--582f--92bc--f2b318630797'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ps2wUN-woOp-sUfc-DGCH-velx-EbWq-ZqQ5PA', 'scsi-0QEMU_QEMU_HARDDISK_88cf38eb-fdbf-404b-9f1d-cd32f6bedf4b', 'scsi-SQEMU_QEMU_HARDDISK_88cf38eb-fdbf-404b-9f1d-cd32f6bedf4b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:43:11.871333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:43:11.871345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2547461e--5dcb--5046--b3ed--0a182c83d3a8-osd--block--2547461e--5dcb--5046--b3ed--0a182c83d3a8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gCazGF-eiF7-zfd2-va82-leUV-Ddn3-wTNAz7', 'scsi-0QEMU_QEMU_HARDDISK_35e8ec34-b9aa-4705-9105-50464be240ba', 'scsi-SQEMU_QEMU_HARDDISK_35e8ec34-b9aa-4705-9105-50464be240ba'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:43:11.871365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:43:11.871372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1c5376b-f7c7-4aac-a0b2-3df8be7d9631', 'scsi-SQEMU_QEMU_HARDDISK_b1c5376b-f7c7-4aac-a0b2-3df8be7d9631'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:43:11.871377 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:43:11.871383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:43:11.871389 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-06-03 15:43:11.871675 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:43:11.871701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:43:11.871708 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:43:11.871723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-03 15:43:11.871745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:43:11.871753 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--610c71bb--335d--5813--8d53--12327c30775e-osd--block--610c71bb--335d--5813--8d53--12327c30775e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-C83Vqp-oUHH-KYth-6H1z-1jr1-Nk57-4zq1JG', 'scsi-0QEMU_QEMU_HARDDISK_fa411336-a154-4770-b6c1-ce8fec2c95f2', 'scsi-SQEMU_QEMU_HARDDISK_fa411336-a154-4770-b6c1-ce8fec2c95f2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:43:11.871761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ae8860ce--b651--5449--9c0b--e6c018225b94-osd--block--ae8860ce--b651--5449--9c0b--e6c018225b94'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-P8VhYf-9wK2-3uzP-XeZ8-f6el-w4Mt-XILiP3', 'scsi-0QEMU_QEMU_HARDDISK_ffe2a0ca-5a38-47a9-803d-00b473435346', 'scsi-SQEMU_QEMU_HARDDISK_ffe2a0ca-5a38-47a9-803d-00b473435346'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:43:11.871774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed092372-9559-4d48-8a48-c44bdb9ee908', 'scsi-SQEMU_QEMU_HARDDISK_ed092372-9559-4d48-8a48-c44bdb9ee908'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:43:11.871785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-03 15:43:11.871791 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:43:11.871797 | orchestrator | 2025-06-03 15:43:11.871803 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-03 15:43:11.871809 | orchestrator | Tuesday 03 June 2025 15:41:15 +0000 (0:00:00.560) 0:00:16.601 ********** 2025-06-03 15:43:11.871816 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5a262827--4eba--5d37--ab06--09e1d49a835c-osd--block--5a262827--4eba--5d37--ab06--09e1d49a835c', 'dm-uuid-LVM-iCTuAI4EJib0jwbvb8c4dXUAVjPvH6yyQD7EGdtmsu0AgRLszQFCT51KxWbLYqCJ'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.871823 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d47078ac--4564--569b--bfa7--6d988d420f95-osd--block--d47078ac--4564--569b--bfa7--6d988d420f95', 'dm-uuid-LVM-MlNOD7DMw9sVFxWua6nlui2P6JGLIXhA9i9s0R6rxyRXeXmxqEKjHCeK1WnDSagY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.871829 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.871835 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.871850 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.871861 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f00e4ac9--9831--582f--92bc--f2b318630797-osd--block--f00e4ac9--9831--582f--92bc--f2b318630797', 'dm-uuid-LVM-99pp97M8vSiq1DcdfNowOmyxQeBHt2RQXSbZdQTdzI57JNQcp5rC1M7FuVrcNJ3v'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.871868 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2547461e--5dcb--5046--b3ed--0a182c83d3a8-osd--block--2547461e--5dcb--5046--b3ed--0a182c83d3a8', 'dm-uuid-LVM-9FhNrjVXl0cAWcs1aJgZ36y2TkiyPUOoyyVrKRwL2rhhw9kJzyHtCgnDt7vmQNTt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.871874 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.871880 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2025-06-03 15:43:11.871892 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.871903 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.871918 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.871924 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.871929 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.871936 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.871941 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.871952 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.871970 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part1', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part14', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part15', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part16', 'scsi-SQEMU_QEMU_HARDDISK_7e1af086-74b9-4b96-b1ab-e1589a6f5143-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-03 15:43:11.871978 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--610c71bb--335d--5813--8d53--12327c30775e-osd--block--610c71bb--335d--5813--8d53--12327c30775e', 'dm-uuid-LVM-oBbrD2y50tGUGcJrG9aMf1XrpfBgDTcIpQggtVkRRBZCLEs5YgTracTrTruq7mo4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.871988 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.871998 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5a262827--4eba--5d37--ab06--09e1d49a835c-osd--block--5a262827--4eba--5d37--ab06--09e1d49a835c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-emfxfm-5qIT-TG7n-rhmg-KsOA-8KKz-w6ga7w', 'scsi-0QEMU_QEMU_HARDDISK_5c901d52-eede-42c5-873c-7ade3ca032e1', 'scsi-SQEMU_QEMU_HARDDISK_5c901d52-eede-42c5-873c-7ade3ca032e1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872013 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ae8860ce--b651--5449--9c0b--e6c018225b94-osd--block--ae8860ce--b651--5449--9c0b--e6c018225b94', 'dm-uuid-LVM-hyyfRBsGTLzhJDBnkMwP7oIAf3aNljpPZneZ7Y2rVIKSrikYC813zvaJkJ2cAlU8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872021 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872027 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d47078ac--4564--569b--bfa7--6d988d420f95-osd--block--d47078ac--4564--569b--bfa7--6d988d420f95'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WBkiQf-THpz-Svwy-wmks-s5gt-2CGA-7xevri', 'scsi-0QEMU_QEMU_HARDDISK_b4ac7e97-dff3-4114-bb9f-c387d4fd8c04', 'scsi-SQEMU_QEMU_HARDDISK_b4ac7e97-dff3-4114-bb9f-c387d4fd8c04'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872037 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872046 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': 
[], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872061 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61b072b3-0d8d-4456-975d-55fef61370d3', 'scsi-SQEMU_QEMU_HARDDISK_61b072b3-0d8d-4456-975d-55fef61370d3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872091 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872098 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_f6db3371-ad49-4dd9-a193-0ba30b3292ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872114 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872125 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872132 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f00e4ac9--9831--582f--92bc--f2b318630797-osd--block--f00e4ac9--9831--582f--92bc--f2b318630797'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ps2wUN-woOp-sUfc-DGCH-velx-EbWq-ZqQ5PA', 'scsi-0QEMU_QEMU_HARDDISK_88cf38eb-fdbf-404b-9f1d-cd32f6bedf4b', 'scsi-SQEMU_QEMU_HARDDISK_88cf38eb-fdbf-404b-9f1d-cd32f6bedf4b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872138 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:43:11.872144 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2547461e--5dcb--5046--b3ed--0a182c83d3a8-osd--block--2547461e--5dcb--5046--b3ed--0a182c83d3a8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gCazGF-eiF7-zfd2-va82-leUV-Ddn3-wTNAz7', 'scsi-0QEMU_QEMU_HARDDISK_35e8ec34-b9aa-4705-9105-50464be240ba', 'scsi-SQEMU_QEMU_HARDDISK_35e8ec34-b9aa-4705-9105-50464be240ba'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872155 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872164 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1c5376b-f7c7-4aac-a0b2-3df8be7d9631', 'scsi-SQEMU_QEMU_HARDDISK_b1c5376b-f7c7-4aac-a0b2-3df8be7d9631'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872173 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872179 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872185 | orchestrator | skipping: 
[testbed-node-4] 2025-06-03 15:43:11.872192 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872198 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872209 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872223 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_ec1efc19-1b1e-4f39-8db8-97e27f5004aa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872230 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--610c71bb--335d--5813--8d53--12327c30775e-osd--block--610c71bb--335d--5813--8d53--12327c30775e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-C83Vqp-oUHH-KYth-6H1z-1jr1-Nk57-4zq1JG', 'scsi-0QEMU_QEMU_HARDDISK_fa411336-a154-4770-b6c1-ce8fec2c95f2', 'scsi-SQEMU_QEMU_HARDDISK_fa411336-a154-4770-b6c1-ce8fec2c95f2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872242 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ae8860ce--b651--5449--9c0b--e6c018225b94-osd--block--ae8860ce--b651--5449--9c0b--e6c018225b94'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-P8VhYf-9wK2-3uzP-XeZ8-f6el-w4Mt-XILiP3', 'scsi-0QEMU_QEMU_HARDDISK_ffe2a0ca-5a38-47a9-803d-00b473435346', 'scsi-SQEMU_QEMU_HARDDISK_ffe2a0ca-5a38-47a9-803d-00b473435346'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872252 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed092372-9559-4d48-8a48-c44bdb9ee908', 'scsi-SQEMU_QEMU_HARDDISK_ed092372-9559-4d48-8a48-c44bdb9ee908'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872263 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-03-14-50-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-03 15:43:11.872269 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:43:11.872275 | orchestrator | 2025-06-03 15:43:11.872281 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-03 15:43:11.872286 | orchestrator | Tuesday 03 June 2025 15:41:16 +0000 (0:00:00.571) 0:00:17.172 ********** 2025-06-03 15:43:11.872290 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:43:11.872294 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:43:11.872298 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:43:11.872301 | orchestrator | 2025-06-03 15:43:11.872305 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-03 15:43:11.872309 | orchestrator | Tuesday 03 June 2025 15:41:17 +0000 (0:00:00.701) 0:00:17.874 ********** 2025-06-03 15:43:11.872313 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:43:11.872317 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:43:11.872320 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:43:11.872324 | orchestrator | 2025-06-03 15:43:11.872328 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-03 15:43:11.872336 | orchestrator | Tuesday 03 June 2025 15:41:17 +0000 (0:00:00.453) 0:00:18.327 ********** 2025-06-03 15:43:11.872339 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:43:11.872343 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:43:11.872347 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:43:11.872351 | orchestrator | 2025-06-03 15:43:11.872354 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-03 15:43:11.872358 | orchestrator | Tuesday 03 June 2025 15:41:18 +0000 (0:00:00.699) 0:00:19.026 
********** 2025-06-03 15:43:11.872362 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:43:11.872366 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:43:11.872370 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:43:11.872373 | orchestrator | 2025-06-03 15:43:11.872377 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-03 15:43:11.872381 | orchestrator | Tuesday 03 June 2025 15:41:18 +0000 (0:00:00.272) 0:00:19.299 ********** 2025-06-03 15:43:11.872384 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:43:11.872388 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:43:11.872392 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:43:11.872396 | orchestrator | 2025-06-03 15:43:11.872399 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-03 15:43:11.872403 | orchestrator | Tuesday 03 June 2025 15:41:18 +0000 (0:00:00.415) 0:00:19.714 ********** 2025-06-03 15:43:11.872407 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:43:11.872411 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:43:11.872414 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:43:11.872440 | orchestrator | 2025-06-03 15:43:11.872446 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-03 15:43:11.872449 | orchestrator | Tuesday 03 June 2025 15:41:19 +0000 (0:00:00.503) 0:00:20.218 ********** 2025-06-03 15:43:11.872454 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-03 15:43:11.872458 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-03 15:43:11.872462 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-03 15:43:11.872466 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-03 15:43:11.872469 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-03 15:43:11.872473 | orchestrator 
| ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-03 15:43:11.872477 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-03 15:43:11.872480 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-03 15:43:11.872484 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-03 15:43:11.872488 | orchestrator | 2025-06-03 15:43:11.872492 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-03 15:43:11.872495 | orchestrator | Tuesday 03 June 2025 15:41:20 +0000 (0:00:00.819) 0:00:21.037 ********** 2025-06-03 15:43:11.872499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-03 15:43:11.872503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-03 15:43:11.872507 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-03 15:43:11.872510 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:43:11.872514 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-03 15:43:11.872518 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-03 15:43:11.872521 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-03 15:43:11.872525 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:43:11.872529 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-03 15:43:11.872533 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-03 15:43:11.872536 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-03 15:43:11.872540 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:43:11.872544 | orchestrator | 2025-06-03 15:43:11.872550 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-03 15:43:11.872560 | orchestrator | Tuesday 03 June 2025 15:41:20 +0000 (0:00:00.367) 0:00:21.405 ********** 2025-06-03 
15:43:11.872564 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:43:11.872568 | orchestrator | 2025-06-03 15:43:11.872572 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-03 15:43:11.872578 | orchestrator | Tuesday 03 June 2025 15:41:21 +0000 (0:00:00.696) 0:00:22.101 ********** 2025-06-03 15:43:11.872582 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:43:11.872585 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:43:11.872589 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:43:11.872593 | orchestrator | 2025-06-03 15:43:11.872599 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-03 15:43:11.872603 | orchestrator | Tuesday 03 June 2025 15:41:21 +0000 (0:00:00.310) 0:00:22.412 ********** 2025-06-03 15:43:11.872607 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:43:11.872611 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:43:11.872615 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:43:11.872619 | orchestrator | 2025-06-03 15:43:11.872622 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-03 15:43:11.872626 | orchestrator | Tuesday 03 June 2025 15:41:21 +0000 (0:00:00.293) 0:00:22.705 ********** 2025-06-03 15:43:11.872630 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:43:11.872634 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:43:11.872637 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:43:11.872641 | orchestrator | 2025-06-03 15:43:11.872645 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-03 15:43:11.872648 | orchestrator | Tuesday 03 June 2025 15:41:22 +0000 (0:00:00.301) 0:00:23.007 ********** 2025-06-03 
15:43:11.872652 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:43:11.872656 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:43:11.872660 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:43:11.872664 | orchestrator | 2025-06-03 15:43:11.872667 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-03 15:43:11.872671 | orchestrator | Tuesday 03 June 2025 15:41:22 +0000 (0:00:00.563) 0:00:23.571 ********** 2025-06-03 15:43:11.872675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:43:11.872679 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:43:11.872682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:43:11.872686 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:43:11.872690 | orchestrator | 2025-06-03 15:43:11.872694 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-03 15:43:11.872697 | orchestrator | Tuesday 03 June 2025 15:41:23 +0000 (0:00:00.365) 0:00:23.936 ********** 2025-06-03 15:43:11.872701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:43:11.872705 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:43:11.872709 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:43:11.872712 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:43:11.872716 | orchestrator | 2025-06-03 15:43:11.872720 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-03 15:43:11.872723 | orchestrator | Tuesday 03 June 2025 15:41:23 +0000 (0:00:00.369) 0:00:24.306 ********** 2025-06-03 15:43:11.872727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-03 15:43:11.872731 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-03 15:43:11.872735 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-03 15:43:11.872738 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:43:11.872742 | orchestrator | 2025-06-03 15:43:11.872746 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-03 15:43:11.872750 | orchestrator | Tuesday 03 June 2025 15:41:23 +0000 (0:00:00.340) 0:00:24.647 ********** 2025-06-03 15:43:11.872758 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:43:11.872762 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:43:11.872765 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:43:11.872769 | orchestrator | 2025-06-03 15:43:11.872773 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-03 15:43:11.872777 | orchestrator | Tuesday 03 June 2025 15:41:24 +0000 (0:00:00.325) 0:00:24.972 ********** 2025-06-03 15:43:11.872780 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-03 15:43:11.872784 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-03 15:43:11.872788 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-03 15:43:11.872792 | orchestrator | 2025-06-03 15:43:11.872795 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-03 15:43:11.872799 | orchestrator | Tuesday 03 June 2025 15:41:24 +0000 (0:00:00.492) 0:00:25.464 ********** 2025-06-03 15:43:11.872803 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-03 15:43:11.872807 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-03 15:43:11.872811 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-03 15:43:11.872814 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-03 15:43:11.872818 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-06-03 15:43:11.872822 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-03 15:43:11.872826 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-03 15:43:11.872830 | orchestrator | 2025-06-03 15:43:11.872834 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-03 15:43:11.872837 | orchestrator | Tuesday 03 June 2025 15:41:25 +0000 (0:00:00.935) 0:00:26.400 ********** 2025-06-03 15:43:11.872844 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-03 15:43:11.872848 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-03 15:43:11.872852 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-03 15:43:11.872856 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-03 15:43:11.872859 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-03 15:43:11.872863 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-03 15:43:11.872867 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-03 15:43:11.872871 | orchestrator | 2025-06-03 15:43:11.872877 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-06-03 15:43:11.872881 | orchestrator | Tuesday 03 June 2025 15:41:27 +0000 (0:00:01.934) 0:00:28.334 ********** 2025-06-03 15:43:11.872885 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:43:11.872888 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:43:11.872892 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-06-03 15:43:11.872896 | orchestrator | 2025-06-03 15:43:11.872900 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-06-03 15:43:11.872904 | orchestrator | Tuesday 03 June 2025 15:41:28 +0000 (0:00:00.391) 0:00:28.726 ********** 2025-06-03 15:43:11.872908 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-03 15:43:11.872913 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-03 15:43:11.872920 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-03 15:43:11.872924 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-03 15:43:11.872928 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-03 15:43:11.872932 | orchestrator | 2025-06-03 15:43:11.872936 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-06-03 15:43:11.872940 | orchestrator | Tuesday 03 June 2025 15:42:13 +0000 (0:00:45.508) 0:01:14.234 ********** 2025-06-03 15:43:11.872943 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:43:11.872947 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:43:11.872951 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:43:11.872955 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:43:11.872958 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:43:11.872962 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:43:11.872966 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-06-03 15:43:11.872969 | orchestrator | 2025-06-03 15:43:11.872973 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-06-03 15:43:11.872977 | orchestrator | Tuesday 03 June 2025 15:42:38 +0000 (0:00:25.052) 0:01:39.286 ********** 2025-06-03 15:43:11.872981 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:43:11.872984 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:43:11.872988 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:43:11.872992 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:43:11.872995 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:43:11.872999 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:43:11.873003 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-03 15:43:11.873007 | orchestrator | 2025-06-03 15:43:11.873010 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-06-03 15:43:11.873016 | orchestrator | Tuesday 03 June 2025 15:42:51 +0000 (0:00:12.910) 0:01:52.197 ********** 2025-06-03 15:43:11.873020 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:43:11.873024 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-03 15:43:11.873028 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-03 15:43:11.873148 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:43:11.873158 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-03 15:43:11.873164 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-03 15:43:11.873182 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:43:11.873188 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-03 15:43:11.873192 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-03 15:43:11.873196 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:43:11.873199 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-03 15:43:11.873203 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-03 15:43:11.873207 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-03 15:43:11.873211 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-06-03 15:43:11.873214 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-03 15:43:11.873218 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-03 15:43:11.873222 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-03 15:43:11.873225 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-03 15:43:11.873229 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-06-03 15:43:11.873233 | orchestrator |
2025-06-03 15:43:11.873237 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:43:11.873241 | orchestrator | testbed-node-3 : ok=25  changed=0  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-06-03 15:43:11.873245 | orchestrator | testbed-node-4 : ok=18  changed=0  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-06-03 15:43:11.873249 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-03 15:43:11.873253 | orchestrator |
2025-06-03 15:43:11.873257 | orchestrator |
2025-06-03 15:43:11.873261 | orchestrator |
2025-06-03 15:43:11.873264 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:43:11.873268 | orchestrator | Tuesday 03 June 2025 15:43:08 +0000 (0:00:17.070) 0:02:09.268 **********
2025-06-03 15:43:11.873272 | orchestrator | ===============================================================================
2025-06-03 15:43:11.873276 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.51s
2025-06-03 15:43:11.873280 | orchestrator | generate keys ---------------------------------------------------------- 25.05s
2025-06-03 15:43:11.873283 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.07s
2025-06-03 15:43:11.873287 | orchestrator | get keys from monitors ------------------------------------------------- 12.91s
2025-06-03 15:43:11.873291 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.19s
2025-06-03 15:43:11.873294 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.93s
2025-06-03 15:43:11.873298 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.74s
2025-06-03 15:43:11.873302 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.94s
2025-06-03 15:43:11.873306 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.82s
2025-06-03 15:43:11.873309 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.80s
2025-06-03 15:43:11.873313 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.74s
2025-06-03 15:43:11.873317 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.70s
2025-06-03 15:43:11.873320 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.70s
2025-06-03 15:43:11.873324 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s
2025-06-03 15:43:11.873331 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.64s
2025-06-03 15:43:11.873335 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.63s
2025-06-03 15:43:11.873339 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.63s
2025-06-03 15:43:11.873343 | orchestrator | ceph-facts : Set_fact discovered_interpreter_python if not previously set --- 0.60s
2025-06-03 15:43:11.873346 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.57s
2025-06-03 15:43:11.873350 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.56s 2025-06-03 15:43:11.873358 | orchestrator | 2025-06-03 15:43:11 | INFO  | Task 3d4d01b4-8977-4bbc-825f-6acecb667a8b is in state STARTED 2025-06-03 15:43:11.873362 | orchestrator | 2025-06-03 15:43:11 | INFO  | Task 370c44b3-02c1-45b9-a587-25a6f93fa861 is in state STARTED 2025-06-03 15:43:11.873366 | orchestrator | 2025-06-03 15:43:11 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:14.904519 | orchestrator | 2025-06-03 15:43:14 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED 2025-06-03 15:43:14.907829 | orchestrator | 2025-06-03 15:43:14 | INFO  | Task 3d4d01b4-8977-4bbc-825f-6acecb667a8b is in state STARTED 2025-06-03 15:43:14.910136 | orchestrator | 2025-06-03 15:43:14 | INFO  | Task 370c44b3-02c1-45b9-a587-25a6f93fa861 is in state STARTED 2025-06-03 15:43:14.910211 | orchestrator | 2025-06-03 15:43:14 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:17.951888 | orchestrator | 2025-06-03 15:43:17 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED 2025-06-03 15:43:17.954386 | orchestrator | 2025-06-03 15:43:17 | INFO  | Task 3d4d01b4-8977-4bbc-825f-6acecb667a8b is in state STARTED 2025-06-03 15:43:17.956748 | orchestrator | 2025-06-03 15:43:17 | INFO  | Task 370c44b3-02c1-45b9-a587-25a6f93fa861 is in state STARTED 2025-06-03 15:43:17.957193 | orchestrator | 2025-06-03 15:43:17 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:21.008220 | orchestrator | 2025-06-03 15:43:21 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED 2025-06-03 15:43:21.009862 | orchestrator | 2025-06-03 15:43:21 | INFO  | Task 3d4d01b4-8977-4bbc-825f-6acecb667a8b is in state STARTED 2025-06-03 15:43:21.011732 | orchestrator | 2025-06-03 15:43:21 | INFO  | Task 370c44b3-02c1-45b9-a587-25a6f93fa861 is in state STARTED 2025-06-03 15:43:21.011797 | orchestrator | 
2025-06-03 15:43:21 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:24.080379 | orchestrator | 2025-06-03 15:43:24 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED 2025-06-03 15:43:24.082680 | orchestrator | 2025-06-03 15:43:24 | INFO  | Task 3d4d01b4-8977-4bbc-825f-6acecb667a8b is in state STARTED 2025-06-03 15:43:24.083329 | orchestrator | 2025-06-03 15:43:24 | INFO  | Task 370c44b3-02c1-45b9-a587-25a6f93fa861 is in state STARTED 2025-06-03 15:43:24.083371 | orchestrator | 2025-06-03 15:43:24 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:27.121738 | orchestrator | 2025-06-03 15:43:27 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED 2025-06-03 15:43:27.122217 | orchestrator | 2025-06-03 15:43:27 | INFO  | Task 3d4d01b4-8977-4bbc-825f-6acecb667a8b is in state STARTED 2025-06-03 15:43:27.123453 | orchestrator | 2025-06-03 15:43:27 | INFO  | Task 370c44b3-02c1-45b9-a587-25a6f93fa861 is in state STARTED 2025-06-03 15:43:27.123496 | orchestrator | 2025-06-03 15:43:27 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:30.175044 | orchestrator | 2025-06-03 15:43:30 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED 2025-06-03 15:43:30.176615 | orchestrator | 2025-06-03 15:43:30 | INFO  | Task 3d4d01b4-8977-4bbc-825f-6acecb667a8b is in state STARTED 2025-06-03 15:43:30.177439 | orchestrator | 2025-06-03 15:43:30 | INFO  | Task 370c44b3-02c1-45b9-a587-25a6f93fa861 is in state STARTED 2025-06-03 15:43:30.177481 | orchestrator | 2025-06-03 15:43:30 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:33.216141 | orchestrator | 2025-06-03 15:43:33 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED 2025-06-03 15:43:33.218794 | orchestrator | 2025-06-03 15:43:33 | INFO  | Task 3d4d01b4-8977-4bbc-825f-6acecb667a8b is in state STARTED 2025-06-03 15:43:33.223012 | orchestrator | 2025-06-03 15:43:33 | INFO  | Task 
370c44b3-02c1-45b9-a587-25a6f93fa861 is in state STARTED 2025-06-03 15:43:33.223082 | orchestrator | 2025-06-03 15:43:33 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:36.272359 | orchestrator | 2025-06-03 15:43:36 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED 2025-06-03 15:43:36.274184 | orchestrator | 2025-06-03 15:43:36 | INFO  | Task 3d4d01b4-8977-4bbc-825f-6acecb667a8b is in state STARTED 2025-06-03 15:43:36.277748 | orchestrator | 2025-06-03 15:43:36 | INFO  | Task 370c44b3-02c1-45b9-a587-25a6f93fa861 is in state SUCCESS 2025-06-03 15:43:36.277808 | orchestrator | 2025-06-03 15:43:36 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:36.279538 | orchestrator | 2025-06-03 15:43:36.279579 | orchestrator | 2025-06-03 15:43:36.279591 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:43:36.279603 | orchestrator | 2025-06-03 15:43:36.279614 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:43:36.279640 | orchestrator | Tuesday 03 June 2025 15:41:46 +0000 (0:00:00.198) 0:00:00.198 ********** 2025-06-03 15:43:36.279653 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:36.279663 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:36.279670 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:36.279676 | orchestrator | 2025-06-03 15:43:36.279683 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:43:36.279690 | orchestrator | Tuesday 03 June 2025 15:41:46 +0000 (0:00:00.229) 0:00:00.428 ********** 2025-06-03 15:43:36.279697 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-06-03 15:43:36.279704 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-06-03 15:43:36.279711 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-06-03 15:43:36.279718 | orchestrator | 
2025-06-03 15:43:36.279724 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-06-03 15:43:36.279731 | orchestrator | 2025-06-03 15:43:36.279737 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-03 15:43:36.279747 | orchestrator | Tuesday 03 June 2025 15:41:47 +0000 (0:00:00.337) 0:00:00.765 ********** 2025-06-03 15:43:36.279759 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:43:36.279772 | orchestrator | 2025-06-03 15:43:36.279783 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-06-03 15:43:36.279794 | orchestrator | Tuesday 03 June 2025 15:41:47 +0000 (0:00:00.452) 0:00:01.217 ********** 2025-06-03 15:43:36.279811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:43:36.279873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:43:36.279888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:43:36.279909 | orchestrator | 2025-06-03 15:43:36.279917 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-06-03 15:43:36.279923 | orchestrator | Tuesday 03 June 2025 15:41:48 +0000 (0:00:00.981) 0:00:02.198 ********** 2025-06-03 15:43:36.279930 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:36.279937 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:36.279943 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:36.279950 | orchestrator | 2025-06-03 15:43:36.279957 | orchestrator | 
TASK [horizon : include_tasks] ************************************************* 2025-06-03 15:43:36.279963 | orchestrator | Tuesday 03 June 2025 15:41:49 +0000 (0:00:00.377) 0:00:02.576 ********** 2025-06-03 15:43:36.279970 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-03 15:43:36.279981 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-03 15:43:36.279988 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-06-03 15:43:36.279998 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-06-03 15:43:36.280005 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-06-03 15:43:36.280011 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-06-03 15:43:36.280018 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-06-03 15:43:36.280024 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-06-03 15:43:36.280031 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-03 15:43:36.280038 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-03 15:43:36.280044 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-06-03 15:43:36.280051 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-06-03 15:43:36.280057 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-06-03 15:43:36.280064 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-06-03 15:43:36.280075 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  
2025-06-03 15:43:36.280085 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-06-03 15:43:36.280096 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-03 15:43:36.280105 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-03 15:43:36.280114 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-06-03 15:43:36.280123 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-06-03 15:43:36.280141 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-06-03 15:43:36.280153 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-06-03 15:43:36.280163 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-06-03 15:43:36.280173 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-06-03 15:43:36.280185 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-06-03 15:43:36.280198 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-06-03 15:43:36.280209 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-06-03 15:43:36.280220 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-06-03 15:43:36.280231 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-06-03 15:43:36.280239 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-06-03 15:43:36.280247 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-06-03 15:43:36.280254 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-06-03 15:43:36.280262 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-06-03 15:43:36.280271 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-06-03 15:43:36.280278 | orchestrator | 2025-06-03 15:43:36.280286 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:36.280294 | orchestrator | Tuesday 03 June 2025 15:41:49 +0000 (0:00:00.671) 0:00:03.247 ********** 2025-06-03 15:43:36.280301 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:36.280308 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:36.280314 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:36.280321 | orchestrator | 2025-06-03 15:43:36.280328 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:36.280334 | orchestrator | Tuesday 03 June 2025 15:41:50 +0000 (0:00:00.275) 0:00:03.523 ********** 2025-06-03 15:43:36.280341 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.280348 | orchestrator | 2025-06-03 15:43:36.280360 | orchestrator | TASK [horizon 
: Update custom policy file name] ******************************** 2025-06-03 15:43:36.280367 | orchestrator | Tuesday 03 June 2025 15:41:50 +0000 (0:00:00.117) 0:00:03.641 ********** 2025-06-03 15:43:36.280384 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.280391 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:36.280420 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:36.280428 | orchestrator | 2025-06-03 15:43:36.280434 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:36.280441 | orchestrator | Tuesday 03 June 2025 15:41:50 +0000 (0:00:00.389) 0:00:04.031 ********** 2025-06-03 15:43:36.280447 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:36.280454 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:36.280461 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:36.280467 | orchestrator | 2025-06-03 15:43:36.280474 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:36.280480 | orchestrator | Tuesday 03 June 2025 15:41:50 +0000 (0:00:00.242) 0:00:04.274 ********** 2025-06-03 15:43:36.280487 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.280494 | orchestrator | 2025-06-03 15:43:36.280500 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:36.280507 | orchestrator | Tuesday 03 June 2025 15:41:50 +0000 (0:00:00.125) 0:00:04.399 ********** 2025-06-03 15:43:36.280514 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.280520 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:36.280527 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:36.280533 | orchestrator | 2025-06-03 15:43:36.280540 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:36.280546 | orchestrator | Tuesday 03 June 2025 15:41:51 +0000 (0:00:00.257) 
0:00:04.656 ********** 2025-06-03 15:43:36.280553 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:36.280560 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:36.280566 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:36.280573 | orchestrator | 2025-06-03 15:43:36.280580 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:36.280586 | orchestrator | Tuesday 03 June 2025 15:41:51 +0000 (0:00:00.247) 0:00:04.904 ********** 2025-06-03 15:43:36.280593 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.280600 | orchestrator | 2025-06-03 15:43:36.280606 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:36.280613 | orchestrator | Tuesday 03 June 2025 15:41:51 +0000 (0:00:00.253) 0:00:05.158 ********** 2025-06-03 15:43:36.280619 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.280626 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:36.280633 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:36.280639 | orchestrator | 2025-06-03 15:43:36.280646 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:36.280652 | orchestrator | Tuesday 03 June 2025 15:41:51 +0000 (0:00:00.251) 0:00:05.410 ********** 2025-06-03 15:43:36.280659 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:36.280666 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:36.280672 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:36.280679 | orchestrator | 2025-06-03 15:43:36.280685 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:36.280692 | orchestrator | Tuesday 03 June 2025 15:41:52 +0000 (0:00:00.263) 0:00:05.673 ********** 2025-06-03 15:43:36.280698 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.280705 | orchestrator | 2025-06-03 15:43:36.280712 | orchestrator | 
TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:36.280718 | orchestrator | Tuesday 03 June 2025 15:41:52 +0000 (0:00:00.110) 0:00:05.784 ********** 2025-06-03 15:43:36.280725 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.280731 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:36.280738 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:36.280744 | orchestrator | 2025-06-03 15:43:36.280751 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:36.280758 | orchestrator | Tuesday 03 June 2025 15:41:52 +0000 (0:00:00.249) 0:00:06.034 ********** 2025-06-03 15:43:36.280765 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:36.280776 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:36.280782 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:36.280789 | orchestrator | 2025-06-03 15:43:36.280796 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:36.280802 | orchestrator | Tuesday 03 June 2025 15:41:52 +0000 (0:00:00.396) 0:00:06.430 ********** 2025-06-03 15:43:36.280809 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.280816 | orchestrator | 2025-06-03 15:43:36.280822 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:36.280829 | orchestrator | Tuesday 03 June 2025 15:41:53 +0000 (0:00:00.122) 0:00:06.552 ********** 2025-06-03 15:43:36.280836 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.280842 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:36.280849 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:36.280855 | orchestrator | 2025-06-03 15:43:36.280862 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:36.280869 | orchestrator | Tuesday 03 June 2025 15:41:53 +0000 
(0:00:00.312) 0:00:06.865 ********** 2025-06-03 15:43:36.280875 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:36.280882 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:36.280888 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:36.280895 | orchestrator | 2025-06-03 15:43:36.280902 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:36.280908 | orchestrator | Tuesday 03 June 2025 15:41:53 +0000 (0:00:00.310) 0:00:07.175 ********** 2025-06-03 15:43:36.280915 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.280921 | orchestrator | 2025-06-03 15:43:36.280928 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:36.280935 | orchestrator | Tuesday 03 June 2025 15:41:53 +0000 (0:00:00.125) 0:00:07.301 ********** 2025-06-03 15:43:36.280941 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.280948 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:36.280954 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:36.280961 | orchestrator | 2025-06-03 15:43:36.280968 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:36.280978 | orchestrator | Tuesday 03 June 2025 15:41:54 +0000 (0:00:00.490) 0:00:07.792 ********** 2025-06-03 15:43:36.280985 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:36.280991 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:36.280998 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:36.281005 | orchestrator | 2025-06-03 15:43:36.281015 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:36.281022 | orchestrator | Tuesday 03 June 2025 15:41:54 +0000 (0:00:00.335) 0:00:08.127 ********** 2025-06-03 15:43:36.281028 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.281035 | orchestrator | 2025-06-03 15:43:36.281042 | 
orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:36.281048 | orchestrator | Tuesday 03 June 2025 15:41:54 +0000 (0:00:00.123) 0:00:08.251 ********** 2025-06-03 15:43:36.281055 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.281061 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:36.281068 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:36.281074 | orchestrator | 2025-06-03 15:43:36.281081 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:36.281088 | orchestrator | Tuesday 03 June 2025 15:41:55 +0000 (0:00:00.304) 0:00:08.555 ********** 2025-06-03 15:43:36.281094 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:36.281101 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:36.281108 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:36.281114 | orchestrator | 2025-06-03 15:43:36.281121 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:36.281128 | orchestrator | Tuesday 03 June 2025 15:41:55 +0000 (0:00:00.415) 0:00:08.971 ********** 2025-06-03 15:43:36.281134 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.281141 | orchestrator | 2025-06-03 15:43:36.281151 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:36.281157 | orchestrator | Tuesday 03 June 2025 15:41:55 +0000 (0:00:00.138) 0:00:09.109 ********** 2025-06-03 15:43:36.281164 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.281171 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:36.281177 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:36.281184 | orchestrator | 2025-06-03 15:43:36.281190 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:36.281197 | orchestrator | Tuesday 03 June 2025 
15:41:56 +0000 (0:00:00.543) 0:00:09.653 ********** 2025-06-03 15:43:36.281204 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:36.281210 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:36.281217 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:36.281223 | orchestrator | 2025-06-03 15:43:36.281230 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:36.281237 | orchestrator | Tuesday 03 June 2025 15:41:56 +0000 (0:00:00.346) 0:00:09.999 ********** 2025-06-03 15:43:36.281243 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.281250 | orchestrator | 2025-06-03 15:43:36.281257 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:36.281263 | orchestrator | Tuesday 03 June 2025 15:41:56 +0000 (0:00:00.148) 0:00:10.148 ********** 2025-06-03 15:43:36.281270 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.281276 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:36.281283 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:36.281289 | orchestrator | 2025-06-03 15:43:36.281296 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-03 15:43:36.281302 | orchestrator | Tuesday 03 June 2025 15:41:56 +0000 (0:00:00.313) 0:00:10.461 ********** 2025-06-03 15:43:36.281309 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:43:36.281316 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:43:36.281322 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:43:36.281329 | orchestrator | 2025-06-03 15:43:36.281336 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-03 15:43:36.281342 | orchestrator | Tuesday 03 June 2025 15:41:57 +0000 (0:00:00.544) 0:00:11.006 ********** 2025-06-03 15:43:36.281349 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.281358 | orchestrator | 2025-06-03 
15:43:36.281370 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-03 15:43:36.281382 | orchestrator | Tuesday 03 June 2025 15:41:57 +0000 (0:00:00.146) 0:00:11.153 ********** 2025-06-03 15:43:36.281393 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.281426 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:36.281437 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:36.281447 | orchestrator | 2025-06-03 15:43:36.281458 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-06-03 15:43:36.281469 | orchestrator | Tuesday 03 June 2025 15:41:57 +0000 (0:00:00.296) 0:00:11.449 ********** 2025-06-03 15:43:36.281478 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:43:36.281489 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:43:36.281499 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:43:36.281509 | orchestrator | 2025-06-03 15:43:36.281520 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-06-03 15:43:36.281530 | orchestrator | Tuesday 03 June 2025 15:41:59 +0000 (0:00:01.710) 0:00:13.160 ********** 2025-06-03 15:43:36.281541 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-03 15:43:36.281552 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-03 15:43:36.281563 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-03 15:43:36.281573 | orchestrator | 2025-06-03 15:43:36.281584 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-06-03 15:43:36.281595 | orchestrator | Tuesday 03 June 2025 15:42:01 +0000 (0:00:01.832) 0:00:14.993 ********** 2025-06-03 15:43:36.281615 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-03 15:43:36.281627 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-03 15:43:36.281638 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-03 15:43:36.281649 | orchestrator | 2025-06-03 15:43:36.281660 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-06-03 15:43:36.281679 | orchestrator | Tuesday 03 June 2025 15:42:04 +0000 (0:00:02.813) 0:00:17.807 ********** 2025-06-03 15:43:36.281695 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-03 15:43:36.281706 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-03 15:43:36.281717 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-03 15:43:36.281728 | orchestrator | 2025-06-03 15:43:36.281739 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-06-03 15:43:36.281745 | orchestrator | Tuesday 03 June 2025 15:42:06 +0000 (0:00:01.765) 0:00:19.572 ********** 2025-06-03 15:43:36.281752 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.281759 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:36.281765 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:36.281772 | orchestrator | 2025-06-03 15:43:36.281778 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-06-03 15:43:36.281785 | orchestrator | Tuesday 03 June 2025 15:42:06 +0000 (0:00:00.342) 0:00:19.914 ********** 2025-06-03 15:43:36.281794 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.281805 | orchestrator | skipping: [testbed-node-1] 2025-06-03 
15:43:36.281819 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:36.281836 | orchestrator | 2025-06-03 15:43:36.281845 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-03 15:43:36.281855 | orchestrator | Tuesday 03 June 2025 15:42:06 +0000 (0:00:00.306) 0:00:20.221 ********** 2025-06-03 15:43:36.281865 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:43:36.281876 | orchestrator | 2025-06-03 15:43:36.281886 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-06-03 15:43:36.281896 | orchestrator | Tuesday 03 June 2025 15:42:07 +0000 (0:00:00.841) 0:00:21.062 ********** 2025-06-03 15:43:36.281908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:43:36.281951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:43:36.281966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:43:36.281985 | orchestrator | 2025-06-03 15:43:36.281997 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-06-03 15:43:36.282007 | orchestrator | Tuesday 03 June 2025 15:42:09 +0000 (0:00:01.525) 0:00:22.588 ********** 2025-06-03 15:43:36.282088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-03 15:43:36.282099 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:36.282111 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-03 15:43:36.282133 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:36.282154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-03 15:43:36.282166 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.282177 | orchestrator | 2025-06-03 15:43:36.282188 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-06-03 15:43:36.282206 | orchestrator | Tuesday 03 June 2025 15:42:09 +0000 (0:00:00.672) 0:00:23.260 ********** 2025-06-03 15:43:36.282231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-03 15:43:36.282243 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.282255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-03 15:43:36.282274 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:36.282301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-03 15:43:36.282314 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:36.282325 | orchestrator | 2025-06-03 15:43:36.282336 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-06-03 15:43:36.282346 | orchestrator | Tuesday 03 June 2025 15:42:10 +0000 (0:00:01.083) 0:00:24.343 ********** 2025-06-03 15:43:36.282358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:43:36.282391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:43:36.282426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-03 15:43:36.282439 | orchestrator | 2025-06-03 15:43:36.282446 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-03 15:43:36.282453 | orchestrator | Tuesday 03 June 2025 15:42:12 +0000 (0:00:01.192) 0:00:25.536 ********** 2025-06-03 15:43:36.282460 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:43:36.282467 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:43:36.282474 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:43:36.282480 | orchestrator | 2025-06-03 15:43:36.282487 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-03 15:43:36.282494 | orchestrator | Tuesday 03 June 2025 15:42:12 +0000 (0:00:00.332) 0:00:25.869 ********** 2025-06-03 15:43:36.282501 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:43:36.282507 | orchestrator | 2025-06-03 15:43:36.282514 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-06-03 15:43:36.282526 | orchestrator | Tuesday 03 June 2025 15:42:13 +0000 (0:00:00.754) 0:00:26.623 ********** 2025-06-03 15:43:36.282533 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:43:36.282539 | orchestrator | 2025-06-03 15:43:36.282546 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-06-03 15:43:36.282556 | orchestrator | Tuesday 03 June 2025 
15:42:15 +0000 (0:00:02.164) 0:00:28.788 ********** 2025-06-03 15:43:36.282563 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:43:36.282570 | orchestrator | 2025-06-03 15:43:36.282576 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-06-03 15:43:36.282583 | orchestrator | Tuesday 03 June 2025 15:42:17 +0000 (0:00:02.135) 0:00:30.923 ********** 2025-06-03 15:43:36.282590 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:43:36.282600 | orchestrator | 2025-06-03 15:43:36.282611 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-03 15:43:36.282628 | orchestrator | Tuesday 03 June 2025 15:42:33 +0000 (0:00:15.885) 0:00:46.809 ********** 2025-06-03 15:43:36.282640 | orchestrator | 2025-06-03 15:43:36.282651 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-03 15:43:36.282661 | orchestrator | Tuesday 03 June 2025 15:42:33 +0000 (0:00:00.068) 0:00:46.877 ********** 2025-06-03 15:43:36.282671 | orchestrator | 2025-06-03 15:43:36.282682 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-03 15:43:36.282693 | orchestrator | Tuesday 03 June 2025 15:42:33 +0000 (0:00:00.063) 0:00:46.940 ********** 2025-06-03 15:43:36.282704 | orchestrator | 2025-06-03 15:43:36.282714 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-06-03 15:43:36.282726 | orchestrator | Tuesday 03 June 2025 15:42:33 +0000 (0:00:00.074) 0:00:47.014 ********** 2025-06-03 15:43:36.282743 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:43:36.282750 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:43:36.282756 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:43:36.282763 | orchestrator | 2025-06-03 15:43:36.282769 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-03 15:43:36.282776 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-06-03 15:43:36.282784 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-03 15:43:36.282790 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-03 15:43:36.282797 | orchestrator | 2025-06-03 15:43:36.282803 | orchestrator | 2025-06-03 15:43:36.282810 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:43:36.282817 | orchestrator | Tuesday 03 June 2025 15:43:34 +0000 (0:01:00.989) 0:01:48.004 ********** 2025-06-03 15:43:36.282823 | orchestrator | =============================================================================== 2025-06-03 15:43:36.282830 | orchestrator | horizon : Restart horizon container ------------------------------------ 60.99s 2025-06-03 15:43:36.282836 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.89s 2025-06-03 15:43:36.282843 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.81s 2025-06-03 15:43:36.282850 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.16s 2025-06-03 15:43:36.282856 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.14s 2025-06-03 15:43:36.282863 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.83s 2025-06-03 15:43:36.282869 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.77s 2025-06-03 15:43:36.282876 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.71s 2025-06-03 15:43:36.282882 | orchestrator | service-cert-copy : horizon | 
Copying over extra CA certificates -------- 1.53s 2025-06-03 15:43:36.282889 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.19s 2025-06-03 15:43:36.282896 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.08s 2025-06-03 15:43:36.282902 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 0.98s 2025-06-03 15:43:36.282909 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.84s 2025-06-03 15:43:36.282915 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s 2025-06-03 15:43:36.282922 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.67s 2025-06-03 15:43:36.282929 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.67s 2025-06-03 15:43:36.282935 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s 2025-06-03 15:43:36.282942 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.54s 2025-06-03 15:43:36.282948 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.49s 2025-06-03 15:43:36.282955 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.45s 2025-06-03 15:43:39.330110 | orchestrator | 2025-06-03 15:43:39 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED 2025-06-03 15:43:39.331804 | orchestrator | 2025-06-03 15:43:39 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state STARTED 2025-06-03 15:43:39.333093 | orchestrator | 2025-06-03 15:43:39 | INFO  | Task 3d4d01b4-8977-4bbc-825f-6acecb667a8b is in state SUCCESS 2025-06-03 15:43:39.333301 | orchestrator | 2025-06-03 15:43:39 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:42.384168 | orchestrator | 2025-06-03 
15:43:42 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED 2025-06-03 15:43:42.384313 | orchestrator | 2025-06-03 15:43:42 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state STARTED 2025-06-03 15:43:42.384333 | orchestrator | 2025-06-03 15:43:42 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:45.431514 | orchestrator | 2025-06-03 15:43:45 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED 2025-06-03 15:43:45.436048 | orchestrator | 2025-06-03 15:43:45 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state STARTED 2025-06-03 15:43:45.436135 | orchestrator | 2025-06-03 15:43:45 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:48.467509 | orchestrator | 2025-06-03 15:43:48 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED 2025-06-03 15:43:48.469667 | orchestrator | 2025-06-03 15:43:48 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state STARTED 2025-06-03 15:43:48.470163 | orchestrator | 2025-06-03 15:43:48 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:51.503228 | orchestrator | 2025-06-03 15:43:51 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED 2025-06-03 15:43:51.505189 | orchestrator | 2025-06-03 15:43:51 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state STARTED 2025-06-03 15:43:51.505558 | orchestrator | 2025-06-03 15:43:51 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:54.553045 | orchestrator | 2025-06-03 15:43:54 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED 2025-06-03 15:43:54.554462 | orchestrator | 2025-06-03 15:43:54 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state STARTED 2025-06-03 15:43:54.554539 | orchestrator | 2025-06-03 15:43:54 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:43:57.589198 | orchestrator | 2025-06-03 15:43:57 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state 
STARTED
2025-06-03 15:43:57.592285 | orchestrator | 2025-06-03 15:43:57 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state STARTED
2025-06-03 15:43:57.592489 | orchestrator | 2025-06-03 15:43:57 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:00.638304 | orchestrator | 2025-06-03 15:44:00 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED
2025-06-03 15:44:00.639739 | orchestrator | 2025-06-03 15:44:00 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state STARTED
2025-06-03 15:44:00.640298 | orchestrator | 2025-06-03 15:44:00 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:03.688467 | orchestrator | 2025-06-03 15:44:03 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED
2025-06-03 15:44:03.692422 | orchestrator | 2025-06-03 15:44:03 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state STARTED
2025-06-03 15:44:03.692485 | orchestrator | 2025-06-03 15:44:03 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:06.736021 | orchestrator | 2025-06-03 15:44:06 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED
2025-06-03 15:44:06.737986 | orchestrator | 2025-06-03 15:44:06 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state STARTED
2025-06-03 15:44:06.738407 | orchestrator | 2025-06-03 15:44:06 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:09.782582 | orchestrator | 2025-06-03 15:44:09 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED
2025-06-03 15:44:09.783740 | orchestrator | 2025-06-03 15:44:09 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state STARTED
2025-06-03 15:44:09.783831 | orchestrator | 2025-06-03 15:44:09 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:12.832465 | orchestrator | 2025-06-03 15:44:12 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED
2025-06-03 15:44:12.833796 | orchestrator | 2025-06-03 15:44:12 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state STARTED
2025-06-03 15:44:12.833952 | orchestrator | 2025-06-03 15:44:12 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:15.878534 | orchestrator | 2025-06-03 15:44:15 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED
2025-06-03 15:44:15.879284 | orchestrator | 2025-06-03 15:44:15 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state STARTED
2025-06-03 15:44:15.879330 | orchestrator | 2025-06-03 15:44:15 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:18.922861 | orchestrator | 2025-06-03 15:44:18 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED
2025-06-03 15:44:18.924319 | orchestrator | 2025-06-03 15:44:18 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state STARTED
2025-06-03 15:44:18.924418 | orchestrator | 2025-06-03 15:44:18 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:21.972028 | orchestrator | 2025-06-03 15:44:21 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED
2025-06-03 15:44:21.974348 | orchestrator | 2025-06-03 15:44:21 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state STARTED
2025-06-03 15:44:21.974432 | orchestrator | 2025-06-03 15:44:21 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:25.027724 | orchestrator | 2025-06-03 15:44:25 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED
2025-06-03 15:44:25.029329 | orchestrator | 2025-06-03 15:44:25 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state STARTED
2025-06-03 15:44:25.029387 | orchestrator | 2025-06-03 15:44:25 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:28.077192 | orchestrator | 2025-06-03 15:44:28 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state STARTED
2025-06-03 15:44:28.078788 | orchestrator | 2025-06-03 15:44:28 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state STARTED
2025-06-03 15:44:28.078836 | orchestrator | 2025-06-03 15:44:28 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:31.125949 | orchestrator |
2025-06-03 15:44:31.126443 | orchestrator |
2025-06-03 15:44:31.126459 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-06-03 15:44:31.126467 | orchestrator |
2025-06-03 15:44:31.126475 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-06-03 15:44:31.126483 | orchestrator | Tuesday 03 June 2025 15:43:12 +0000 (0:00:00.158) 0:00:00.158 **********
2025-06-03 15:44:31.126490 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-06-03 15:44:31.126499 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-06-03 15:44:31.126506 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-06-03 15:44:31.126513 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-06-03 15:44:31.126520 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-06-03 15:44:31.126528 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-06-03 15:44:31.126535 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-06-03 15:44:31.126542 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-06-03 15:44:31.126571 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-06-03 15:44:31.126579 | orchestrator |
2025-06-03 15:44:31.126585 | orchestrator | TASK [Create share directory] **************************************************
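The repeated "is in state STARTED ... Wait 1 second(s) until the next check" entries above come from a simple poll-and-wait loop over the running task IDs. A minimal sketch of that pattern (hypothetical `check_state` callback, not the actual OSISM client API):

```python
import time

def wait_for_tasks(task_ids, check_state, interval=1.0, timeout=3600.0):
    """Poll each task until none is in a pending state such as STARTED."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        still_pending = set()
        for task_id in sorted(pending):
            state = check_state(task_id)  # e.g. "STARTED", "SUCCESS", "FAILURE"
            print(f"Task {task_id} is in state {state}")
            if state not in ("SUCCESS", "FAILURE"):
                still_pending.add(task_id)
        pending = still_pending
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

The fixed one-second interval matches the log; a production poller might add jitter or exponential backoff instead.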
2025-06-03 15:44:31.126592 | orchestrator | Tuesday 03 June 2025 15:43:16 +0000 (0:00:04.208) 0:00:04.366 **********
2025-06-03 15:44:31.126600 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-03 15:44:31.126607 | orchestrator |
2025-06-03 15:44:31.126614 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-06-03 15:44:31.126621 | orchestrator | Tuesday 03 June 2025 15:43:17 +0000 (0:00:00.967) 0:00:05.334 **********
2025-06-03 15:44:31.126628 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-06-03 15:44:31.126635 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-06-03 15:44:31.126642 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-06-03 15:44:31.126649 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-06-03 15:44:31.126657 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-06-03 15:44:31.126664 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-06-03 15:44:31.126671 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-06-03 15:44:31.126678 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-06-03 15:44:31.126685 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-06-03 15:44:31.126692 | orchestrator |
2025-06-03 15:44:31.126699 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-06-03 15:44:31.126705 | orchestrator | Tuesday 03 June 2025 15:43:30 +0000 (0:00:13.119) 0:00:18.454 **********
2025-06-03 15:44:31.126713 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-06-03 15:44:31.126720 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-06-03 15:44:31.126727 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-06-03 15:44:31.126734 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-06-03 15:44:31.126741 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-06-03 15:44:31.126748 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-06-03 15:44:31.126768 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-06-03 15:44:31.126774 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-06-03 15:44:31.126781 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-06-03 15:44:31.126787 | orchestrator |
2025-06-03 15:44:31.126794 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:44:31.126801 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:44:31.126809 | orchestrator |
2025-06-03 15:44:31.126816 | orchestrator |
2025-06-03 15:44:31.126822 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:44:31.126828 | orchestrator | Tuesday 03 June 2025 15:43:37 +0000 (0:00:06.404) 0:00:24.858 **********
2025-06-03 15:44:31.126835 | orchestrator | ===============================================================================
2025-06-03 15:44:31.126841 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.12s
2025-06-03 15:44:31.126848 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.40s
2025-06-03 15:44:31.126854 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.21s
2025-06-03 15:44:31.126866 | orchestrator | Create share directory -------------------------------------------------- 0.97s
2025-06-03 15:44:31.126873 | orchestrator |
2025-06-03 15:44:31.126879 | orchestrator |
2025-06-03 15:44:31.126885 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-03 15:44:31.126892 | orchestrator |
2025-06-03 15:44:31.126937 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-03 15:44:31.126944 | orchestrator | Tuesday 03 June 2025 15:41:46 +0000 (0:00:00.238) 0:00:00.238 **********
2025-06-03 15:44:31.126950 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:44:31.126957 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:44:31.126964 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:44:31.126970 | orchestrator |
2025-06-03 15:44:31.126977 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-03 15:44:31.126984 | orchestrator | Tuesday 03 June 2025 15:41:47 +0000 (0:00:00.250) 0:00:00.488 **********
2025-06-03 15:44:31.126991 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-06-03 15:44:31.126999 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-06-03 15:44:31.127006 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-06-03 15:44:31.127013 | orchestrator |
2025-06-03 15:44:31.127020 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-06-03 15:44:31.127026 | orchestrator |
2025-06-03 15:44:31.127033 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-03 15:44:31.127040 | orchestrator | Tuesday 03 June 2025 15:41:47 +0000 (0:00:00.400) 0:00:00.889 **********
2025-06-03 15:44:31.127047 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:44:31.127054 |
orchestrator | 2025-06-03 15:44:31.127061 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-06-03 15:44:31.127068 | orchestrator | Tuesday 03 June 2025 15:41:47 +0000 (0:00:00.500) 0:00:01.389 ********** 2025-06-03 15:44:31.127079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:31.127094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:31.127126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:31.127135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127171 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127192 | orchestrator | 2025-06-03 15:44:31.127199 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-06-03 15:44:31.127206 | orchestrator | Tuesday 03 June 2025 15:41:49 +0000 (0:00:01.616) 0:00:03.006 ********** 2025-06-03 15:44:31.127213 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-06-03 15:44:31.127221 | orchestrator | 2025-06-03 15:44:31.127227 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-06-03 15:44:31.127253 | orchestrator | Tuesday 03 June 2025 15:41:50 +0000 (0:00:00.770) 0:00:03.776 ********** 
2025-06-03 15:44:31.127261 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:44:31.127268 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:44:31.127274 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:44:31.127280 | orchestrator | 2025-06-03 15:44:31.127286 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-06-03 15:44:31.127293 | orchestrator | Tuesday 03 June 2025 15:41:50 +0000 (0:00:00.391) 0:00:04.168 ********** 2025-06-03 15:44:31.127299 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:44:31.127306 | orchestrator | 2025-06-03 15:44:31.127312 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-03 15:44:31.127318 | orchestrator | Tuesday 03 June 2025 15:41:51 +0000 (0:00:00.608) 0:00:04.776 ********** 2025-06-03 15:44:31.127325 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:44:31.127331 | orchestrator | 2025-06-03 15:44:31.127337 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-06-03 15:44:31.127344 | orchestrator | Tuesday 03 June 2025 15:41:51 +0000 (0:00:00.470) 0:00:05.246 ********** 2025-06-03 15:44:31.127367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:31.127375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:31.127391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:31.127405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127454 | orchestrator | 2025-06-03 15:44:31.127461 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-06-03 15:44:31.127467 | orchestrator | Tuesday 03 June 2025 15:41:55 +0000 (0:00:03.396) 0:00:08.643 ********** 2025-06-03 15:44:31.127481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-03 15:44:31.127488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:31.127495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:44:31.127502 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:31.127509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-03 15:44:31.127525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:31.127532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:44:31.127538 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:44:31.127551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-03 15:44:31.127558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:31.127565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:44:31.127581 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:44:31.127588 | orchestrator | 2025-06-03 15:44:31.127594 | orchestrator | TASK [service-cert-copy : keystone | Copying over 
backend internal TLS key] **** 2025-06-03 15:44:31.127601 | orchestrator | Tuesday 03 June 2025 15:41:55 +0000 (0:00:00.706) 0:00:09.349 ********** 2025-06-03 15:44:31.127610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-03 15:44:31.127617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:31.127628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:44:31.127635 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:31.127642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-03 15:44:31.127653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:31.127671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:44:31.127677 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:44:31.127687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-03 15:44:31 | INFO  | Task d8ab598c-d92d-47d3-97dc-f1b022ff522b is in state SUCCESS 2025-06-03 15:44:31.127699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:31.127714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-03 15:44:31.127720 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:44:31.127731 | orchestrator | 2025-06-03 15:44:31.127737 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-06-03 15:44:31.127744 | orchestrator | Tuesday 03 June 2025 15:41:56 +0000 (0:00:00.800) 0:00:10.150 ********** 2025-06-03 15:44:31.127751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:31.127761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:31.127774 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:31.127781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127829 | orchestrator | 2025-06-03 15:44:31.127836 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-06-03 15:44:31.127842 | orchestrator | Tuesday 03 June 2025 15:42:00 +0000 (0:00:03.874) 0:00:14.025 ********** 2025-06-03 15:44:31.127854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:31.127865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:31.127872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:31.127882 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:31.127893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:31.127900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:31.127911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:31.127931 | orchestrator | 2025-06-03 15:44:31.127938 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-06-03 15:44:31.127944 | orchestrator | Tuesday 03 June 2025 15:42:06 +0000 (0:00:05.433) 0:00:19.458 ********** 2025-06-03 15:44:31.127950 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:44:31.127957 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:44:31.127963 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:44:31.127970 | orchestrator | 2025-06-03 15:44:31.127976 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-06-03 15:44:31.127986 | orchestrator | Tuesday 03 June 2025 15:42:07 +0000 (0:00:01.443) 0:00:20.902 ********** 2025-06-03 15:44:31.127992 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:31.127998 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:44:31.128005 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:44:31.128011 | orchestrator | 2025-06-03 15:44:31.128017 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-06-03 15:44:31.128024 | orchestrator | Tuesday 03 June 2025 15:42:08 +0000 (0:00:00.667) 0:00:21.570 ********** 2025-06-03 15:44:31.128030 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:31.128036 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:44:31.128043 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:44:31.128049 | orchestrator | 2025-06-03 15:44:31.128055 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-06-03 15:44:31.128062 | orchestrator | Tuesday 03 June 2025 15:42:08 +0000 (0:00:00.506) 0:00:22.076 ********** 
2025-06-03 15:44:31.128068 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:31.128074 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:44:31.128080 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:44:31.128087 | orchestrator | 2025-06-03 15:44:31.128093 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-06-03 15:44:31.128099 | orchestrator | Tuesday 03 June 2025 15:42:08 +0000 (0:00:00.316) 0:00:22.393 ********** 2025-06-03 15:44:31.128116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:31.128123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:31.128130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:31.128140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:31.128147 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-03 15:44:31.128162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-03 15:44:31.128169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:31.128176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:31.128182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-03 15:44:31.128189 | orchestrator | 2025-06-03 15:44:31.128195 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-03 15:44:31.128202 | orchestrator | Tuesday 03 June 2025 15:42:11 +0000 (0:00:02.686) 0:00:25.079 ********** 2025-06-03 15:44:31.128208 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:44:31.128214 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:44:31.128220 | 
orchestrator | skipping: [testbed-node-2]
2025-06-03 15:44:31.128227 | orchestrator |
2025-06-03 15:44:31.128233 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2025-06-03 15:44:31.128240 | orchestrator | Tuesday 03 June 2025 15:42:11 +0000 (0:00:00.287) 0:00:25.367 **********
2025-06-03 15:44:31.128246 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-06-03 15:44:31.128258 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-06-03 15:44:31.128265 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-06-03 15:44:31.128272 | orchestrator |
2025-06-03 15:44:31.128283 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-06-03 15:44:31.128290 | orchestrator | Tuesday 03 June 2025 15:42:14 +0000 (0:00:00.951) 0:00:27.477 **********
2025-06-03 15:44:31.128297 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-03 15:44:31.128303 | orchestrator |
2025-06-03 15:44:31.128309 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-06-03 15:44:31.128315 | orchestrator | Tuesday 03 June 2025 15:42:14 +0000 (0:00:00.542) 0:00:28.429 **********
2025-06-03 15:44:31.128321 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:44:31.128328 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:44:31.128335 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:44:31.128342 | orchestrator |
2025-06-03 15:44:31.128364 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-06-03 15:44:31.128371 | orchestrator | Tuesday 03 June 2025 15:42:15 +0000 (0:00:00.542) 0:00:28.971 **********
2025-06-03 15:44:31.128377 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-03 15:44:31.128383 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-03 15:44:31.128390 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-03 15:44:31.128397 | orchestrator |
2025-06-03 15:44:31.128404 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-06-03 15:44:31.128411 | orchestrator | Tuesday 03 June 2025 15:42:16 +0000 (0:00:01.030) 0:00:30.002 **********
2025-06-03 15:44:31.128423 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:44:31.128431 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:44:31.128437 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:44:31.128443 | orchestrator |
2025-06-03 15:44:31.128450 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-06-03 15:44:31.128456 | orchestrator | Tuesday 03 June 2025 15:42:16 +0000 (0:00:00.313) 0:00:30.316 **********
2025-06-03 15:44:31.128463 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-03 15:44:31.128469 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-03 15:44:31.128475 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-03 15:44:31.128482 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-03 15:44:31.128488 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-03 15:44:31.128495 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-03 15:44:31.128501 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-03 15:44:31.128508 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-03 15:44:31.128514 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-03 15:44:31.128521 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-03 15:44:31.128527 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-03 15:44:31.128533 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-03 15:44:31.128540 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-03 15:44:31.128546 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-03 15:44:31.128553 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-03 15:44:31.128559 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-03 15:44:31.128565 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-03 15:44:31.128577 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-03 15:44:31.128583 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-03 15:44:31.128590 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-03 15:44:31.128596 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-03 15:44:31.128602 | orchestrator |
2025-06-03 15:44:31.128609 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-06-03 15:44:31.128615 | orchestrator | Tuesday 03 June 2025 15:42:25 +0000 (0:00:08.806) 0:00:39.122 **********
2025-06-03 15:44:31.128622 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-03 15:44:31.128628 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-03 15:44:31.128634 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-03 15:44:31.128640 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-03 15:44:31.128647 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-03 15:44:31.128656 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-03 15:44:31.128663 | orchestrator |
2025-06-03 15:44:31.128669 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-06-03 15:44:31.128676 | orchestrator | Tuesday 03 June 2025 15:42:28 +0000 (0:00:02.526) 0:00:41.648 **********
2025-06-03 15:44:31.128687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-03 15:44:31.128695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-03 15:44:31.128702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-03 15:44:31.128714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-03 15:44:31.128721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-03 15:44:31.128728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-03 15:44:31.128738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-03 15:44:31.128745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-03 15:44:31.128752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-03 15:44:31.128763 | orchestrator |
2025-06-03 15:44:31.128769 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-03 15:44:31.128776 | orchestrator | Tuesday 03 June 2025 15:42:30 +0000 (0:00:00.308) 0:00:43.886 **********
2025-06-03 15:44:31.128782 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:44:31.128788 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:44:31.128795 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:44:31.128801 | orchestrator |
2025-06-03 15:44:31.128807 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-06-03 15:44:31.128814 | orchestrator | Tuesday 03 June 2025 15:42:30 +0000 (0:00:00.308) 0:00:44.195 **********
2025-06-03 15:44:31.128820 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:44:31.128826 | orchestrator |
2025-06-03 15:44:31.128832 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-06-03 15:44:31.128911 | orchestrator | Tuesday 03 June 2025 15:42:32 +0000 (0:00:02.186) 0:00:46.382 **********
2025-06-03 15:44:31.128929 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:44:31.128935 | orchestrator |
2025-06-03 15:44:31.128940 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-06-03 15:44:31.128946 | orchestrator | Tuesday 03 June 2025 15:42:35 +0000 (0:00:02.637) 0:00:49.019 **********
2025-06-03 15:44:31.128952 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:44:31.128958 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:44:31.128964 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:44:31.128969 | orchestrator |
2025-06-03 15:44:31.128975 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-06-03 15:44:31.128981 | orchestrator | Tuesday 03 June 2025 15:42:36 +0000 (0:00:00.851) 0:00:49.871 **********
2025-06-03 15:44:31.128987 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:44:31.128993 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:44:31.128998 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:44:31.129004 | orchestrator |
2025-06-03 15:44:31.129013 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-06-03 15:44:31.129019 | orchestrator | Tuesday 03 June 2025 15:42:36 +0000 (0:00:00.315) 0:00:50.186 **********
2025-06-03 15:44:31.129025 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:44:31.129030 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:44:31.129036 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:44:31.129042 | orchestrator |
2025-06-03 15:44:31.129048 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-06-03 15:44:31.129053 | orchestrator | Tuesday 03 June 2025 15:42:37 +0000 (0:00:00.327) 0:00:50.514 **********
2025-06-03 15:44:31.129059 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:44:31.129065 | orchestrator |
2025-06-03 15:44:31.129071 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-06-03 15:44:31.129076 | orchestrator | Tuesday 03 June 2025 15:42:51 +0000 (0:00:14.061) 0:01:04.575 **********
2025-06-03 15:44:31.129082 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:44:31.129088 | orchestrator |
2025-06-03 15:44:31.129094 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-06-03 15:44:31.129099 | orchestrator | Tuesday 03 June 2025 15:43:01 +0000 (0:00:10.123) 0:01:14.698 **********
2025-06-03 15:44:31.129105 | orchestrator |
2025-06-03 15:44:31.129111 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-06-03 15:44:31.129117 | orchestrator | Tuesday 03 June 2025 15:43:01 +0000 (0:00:00.254) 0:01:14.952 **********
2025-06-03 15:44:31.129122 | orchestrator |
2025-06-03 15:44:31.129133 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-06-03 15:44:31.129143 | orchestrator | Tuesday 03 June 2025 15:43:01 +0000 (0:00:00.059) 0:01:15.012 **********
2025-06-03 15:44:31.129149 | orchestrator |
2025-06-03 15:44:31.129155 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-06-03 15:44:31.129160 | orchestrator | Tuesday 03 June 2025 15:43:01 +0000 (0:00:00.066) 0:01:15.079 **********
2025-06-03 15:44:31.129166 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:44:31.129172 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:44:31.129178 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:44:31.129184 | orchestrator |
2025-06-03 15:44:31.129189 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-06-03 15:44:31.129195 | orchestrator | Tuesday 03 June 2025 15:43:24 +0000 (0:00:22.900) 0:01:37.980 **********
2025-06-03 15:44:31.129201 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:44:31.129207 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:44:31.129212 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:44:31.129218 | orchestrator |
2025-06-03 15:44:31.129224 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-06-03 15:44:31.129230 | orchestrator | Tuesday 03 June 2025 15:43:30 +0000 (0:00:05.462) 0:01:43.442 **********
2025-06-03 15:44:31.129236 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:44:31.129241 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:44:31.129247 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:44:31.129253 | orchestrator |
2025-06-03 15:44:31.129259 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-03 15:44:31.129265 | orchestrator | Tuesday 03 June 2025 15:43:41 +0000 (0:00:11.328) 0:01:54.771 **********
2025-06-03 15:44:31.129270 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:44:31.129276 | orchestrator |
2025-06-03 15:44:31.129282 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-06-03 15:44:31.129288 | orchestrator | Tuesday 03 June 2025 15:43:42 +0000 (0:00:00.772) 0:01:55.543 **********
2025-06-03 15:44:31.129293 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:44:31.129299 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:44:31.129305 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:44:31.129311 | orchestrator |
2025-06-03 15:44:31.129317 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-06-03 15:44:31.129323 | orchestrator | Tuesday 03 June 2025 15:43:42 +0000 (0:00:00.789) 0:01:56.332 **********
2025-06-03 15:44:31.129328 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:44:31.129334 | orchestrator |
2025-06-03 15:44:31.129340 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-06-03 15:44:31.129346 | orchestrator | Tuesday 03 June 2025 15:43:44 +0000 (0:00:01.720) 0:01:58.053 **********
2025-06-03 15:44:31.129383 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-06-03 15:44:31.129390 | orchestrator |
2025-06-03 15:44:31.129395 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-06-03 15:44:31.129401 | orchestrator | Tuesday 03 June 2025 15:43:55 +0000 (0:00:11.381) 0:02:09.434 **********
2025-06-03 15:44:31.129407 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-06-03 15:44:31.129413 | orchestrator |
2025-06-03 15:44:31.129419 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-06-03 15:44:31.129425 | orchestrator | Tuesday 03 June 2025 15:44:18 +0000 (0:00:22.857) 0:02:32.292 **********
2025-06-03 15:44:31.129430 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-06-03 15:44:31.129436 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-06-03 15:44:31.129442 | orchestrator |
2025-06-03 15:44:31.129448 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-06-03 15:44:31.129454 | orchestrator | Tuesday 03 June 2025 15:44:25 +0000 (0:00:06.760) 0:02:39.053 **********
2025-06-03 15:44:31.129464 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:44:31.129470 | orchestrator |
2025-06-03 15:44:31.129475 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-06-03 15:44:31.129481 | orchestrator | Tuesday 03 June 2025 15:44:25 +0000 (0:00:00.330) 0:02:39.383 **********
2025-06-03 15:44:31.129487 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:44:31.129493 | orchestrator |
2025-06-03 15:44:31.129498 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-06-03 15:44:31.129504 | orchestrator | Tuesday 03 June 2025 15:44:26 +0000 (0:00:00.134) 0:02:39.518 **********
2025-06-03 15:44:31.129514 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:44:31.129520 | orchestrator |
2025-06-03 15:44:31.129525 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-06-03 15:44:31.129531 | orchestrator | Tuesday 03 June 2025 15:44:26 +0000 (0:00:00.130) 0:02:39.648 **********
2025-06-03 15:44:31.129537 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:44:31.129543 | orchestrator |
2025-06-03 15:44:31.129549 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-06-03 15:44:31.129554 | orchestrator | Tuesday 03 June 2025 15:44:26 +0000 (0:00:00.346) 0:02:39.995 **********
2025-06-03 15:44:31.129560 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:44:31.129566 | orchestrator |
2025-06-03 15:44:31.129572 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-03 15:44:31.129577 | orchestrator | Tuesday 03 June 2025 15:44:29 +0000 (0:00:03.142) 0:02:43.137 **********
2025-06-03 15:44:31.129583 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:44:31.129589 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:44:31.129595 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:44:31.129600 | orchestrator |
2025-06-03 15:44:31.129606 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:44:31.129613 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-06-03 15:44:31.129619 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-06-03 15:44:31.129629 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-06-03 15:44:31.129635 | orchestrator |
2025-06-03 15:44:31.129640 | orchestrator |
2025-06-03 15:44:31.129646 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:44:31.129652 | orchestrator | Tuesday 03 June 2025 15:44:30 +0000 (0:00:00.646) 0:02:43.784 **********
2025-06-03 15:44:31.129658 | orchestrator | ===============================================================================
2025-06-03 15:44:31.129664 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 22.90s
2025-06-03 15:44:31.129669 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.86s
2025-06-03 15:44:31.129675 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.06s
2025-06-03 15:44:31.129681 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.38s
2025-06-03 15:44:31.129687 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.33s
2025-06-03 15:44:31.129692 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.12s
2025-06-03 15:44:31.129698 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.81s
2025-06-03 15:44:31.129704 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.76s
2025-06-03 15:44:31.129709 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.46s
2025-06-03 15:44:31.129715 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.43s
2025-06-03 15:44:31.129721 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.87s
2025-06-03 15:44:31.129731 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.40s
2025-06-03 15:44:31.129737 | orchestrator | keystone : Creating default user role ----------------------------------- 3.14s
2025-06-03 15:44:31.129743 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.69s
2025-06-03 15:44:31.129749 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.64s
2025-06-03 15:44:31.129754 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.53s
2025-06-03 15:44:31.129760 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.24s
2025-06-03 15:44:31.129766 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.19s
2025-06-03 15:44:31.129772 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.11s
2025-06-03 15:44:31.129777 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.72s
2025-06-03 15:44:31.129783 | orchestrator | 2025-06-03 15:44:31 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state STARTED
2025-06-03 15:44:31.129789 | orchestrator | 2025-06-03 15:44:31 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:34.150505 | orchestrator | 2025-06-03 15:44:34 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED
2025-06-03 15:44:34.151639 | orchestrator | 2025-06-03 15:44:34 | INFO  | Task af00b03c-edcd-4c4b-a35e-29d3f163948b is in state SUCCESS
2025-06-03 15:44:34.155864 | orchestrator | 2025-06-03 15:44:34 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED
2025-06-03 15:44:34.155926 | orchestrator | 2025-06-03 15:44:34 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED
2025-06-03 15:44:34.156312 | orchestrator | 2025-06-03 15:44:34 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:44:34.157409 | orchestrator | 2025-06-03 15:44:34 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:44:34.157453 | orchestrator | 2025-06-03 15:44:34 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:37.185949 | orchestrator | 2025-06-03 15:44:37 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED
2025-06-03 15:44:37.186185 | orchestrator | 2025-06-03 15:44:37 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED
2025-06-03 15:44:37.186214 | orchestrator | 2025-06-03 15:44:37 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED
2025-06-03 15:44:37.187048 | orchestrator | 2025-06-03 15:44:37 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:44:37.187817 | orchestrator | 2025-06-03 15:44:37 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:44:37.187858 | orchestrator | 2025-06-03 15:44:37 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:40.210153 | orchestrator | 2025-06-03 15:44:40 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED
2025-06-03 15:44:40.210704 | orchestrator | 2025-06-03 15:44:40 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED
2025-06-03 15:44:40.213285 | orchestrator | 2025-06-03 15:44:40 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED
2025-06-03 15:44:40.215234 | orchestrator | 2025-06-03 15:44:40 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:44:40.217547 | orchestrator | 2025-06-03 15:44:40 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:44:40.217853 | orchestrator | 2025-06-03 15:44:40 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:43.258307 | orchestrator | 2025-06-03 15:44:43 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED
2025-06-03 15:44:43.260052 | orchestrator | 2025-06-03 15:44:43 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED
2025-06-03 15:44:43.261527 | orchestrator | 2025-06-03 15:44:43 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED
2025-06-03 15:44:43.262980 | orchestrator | 2025-06-03 15:44:43 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:44:43.264164 | orchestrator | 2025-06-03 15:44:43 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:44:43.264203 | orchestrator | 2025-06-03 15:44:43 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:46.307578 | orchestrator | 2025-06-03 15:44:46 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED
2025-06-03 15:44:46.309394 | orchestrator | 2025-06-03 15:44:46 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED
2025-06-03 15:44:46.312399 | orchestrator | 2025-06-03 15:44:46 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED
2025-06-03 15:44:46.312468 | orchestrator | 2025-06-03 15:44:46 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:44:46.315586 | orchestrator | 2025-06-03 15:44:46 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:44:46.315641 | orchestrator | 2025-06-03 15:44:46 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:49.352887 | orchestrator | 2025-06-03 15:44:49 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED
2025-06-03 15:44:49.354571 | orchestrator | 2025-06-03 15:44:49 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED
2025-06-03 15:44:49.356381 | orchestrator | 2025-06-03 15:44:49 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED
2025-06-03 15:44:49.358144 | orchestrator | 2025-06-03 15:44:49 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:44:49.359671 | orchestrator | 2025-06-03 15:44:49 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:44:49.359709 | orchestrator | 2025-06-03 15:44:49 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:52.393696 | orchestrator | 2025-06-03 15:44:52 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED
2025-06-03 15:44:52.395547 | orchestrator | 2025-06-03 15:44:52 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED
2025-06-03 15:44:52.398680 | orchestrator | 2025-06-03 15:44:52 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED
2025-06-03 15:44:52.401178 | orchestrator | 2025-06-03 15:44:52 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:44:52.403385 | orchestrator | 2025-06-03 15:44:52 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:44:52.403425 | orchestrator | 2025-06-03 15:44:52 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:55.446431 | orchestrator | 2025-06-03 15:44:55 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED
2025-06-03 15:44:55.449096 | orchestrator | 2025-06-03 15:44:55 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED
2025-06-03 15:44:55.450891 | orchestrator | 2025-06-03 15:44:55 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED
2025-06-03 15:44:55.454189 | orchestrator | 2025-06-03 15:44:55 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:44:55.456137 | orchestrator | 2025-06-03 15:44:55 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:44:55.456237 | orchestrator | 2025-06-03 15:44:55 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:44:58.509694 | orchestrator | 2025-06-03 15:44:58 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED
2025-06-03 15:44:58.511537 | orchestrator | 2025-06-03 15:44:58 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED
2025-06-03 15:44:58.514695 | orchestrator | 2025-06-03 15:44:58 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED
2025-06-03 15:44:58.516284 | orchestrator | 2025-06-03 15:44:58 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:44:58.518629 | orchestrator | 2025-06-03 15:44:58 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:44:58.518682 | orchestrator | 2025-06-03 15:44:58 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:45:01.567561 | orchestrator | 2025-06-03 15:45:01 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED
2025-06-03 15:45:01.569308 | orchestrator | 2025-06-03 15:45:01 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED
2025-06-03 15:45:01.572523 | orchestrator | 2025-06-03 15:45:01 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED
2025-06-03 15:45:01.574838 | orchestrator | 2025-06-03 15:45:01 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:45:01.576939 | orchestrator | 2025-06-03 15:45:01 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:45:01.576967 | orchestrator | 2025-06-03 15:45:01 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:45:04.615925 | orchestrator | 2025-06-03 15:45:04 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED
2025-06-03 15:45:04.616024 | orchestrator | 2025-06-03 15:45:04 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED
2025-06-03 15:45:04.617764 | orchestrator | 2025-06-03 15:45:04 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED
2025-06-03 15:45:04.619230 | orchestrator | 2025-06-03 15:45:04 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:45:04.619748 | orchestrator | 2025-06-03 15:45:04 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:45:04.619938 | orchestrator | 2025-06-03 15:45:04 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:45:07.661785 | orchestrator | 2025-06-03 15:45:07 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED
2025-06-03 15:45:07.663042 | orchestrator | 2025-06-03 15:45:07 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED
2025-06-03 15:45:07.664632 | orchestrator | 2025-06-03 15:45:07 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED
2025-06-03 15:45:07.667104 | orchestrator | 2025-06-03 15:45:07 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:45:07.669176 | orchestrator | 2025-06-03 15:45:07 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:45:07.669313 | orchestrator | 2025-06-03 15:45:07 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:45:10.710259 | orchestrator | 2025-06-03 15:45:10 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED
2025-06-03 15:45:10.710369 | orchestrator | 2025-06-03 15:45:10 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED
2025-06-03 15:45:10.714131 | orchestrator | 2025-06-03 15:45:10 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED
2025-06-03 15:45:10.714213 | orchestrator | 2025-06-03 15:45:10 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:45:10.714234 | orchestrator | 2025-06-03 15:45:10 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:45:10.714244 | orchestrator | 2025-06-03 15:45:10 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:45:13.751939 | orchestrator | 2025-06-03 15:45:13 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED
2025-06-03 15:45:13.754199 | orchestrator | 2025-06-03 15:45:13 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED
2025-06-03 15:45:13.755173 | orchestrator | 2025-06-03 15:45:13 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED
2025-06-03 15:45:13.755966 | orchestrator | 2025-06-03 15:45:13 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:45:13.756793 | orchestrator | 2025-06-03 15:45:13 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:45:13.757084 | orchestrator | 2025-06-03 15:45:13 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:45:16.786913 | orchestrator | 2025-06-03 15:45:16 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED
2025-06-03 15:45:16.788268 | orchestrator | 2025-06-03 15:45:16 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED
2025-06-03 15:45:16.789664 | orchestrator | 2025-06-03 15:45:16 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED
2025-06-03 15:45:16.790423 | orchestrator | 2025-06-03 15:45:16 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:45:16.791489 | orchestrator | 2025-06-03 15:45:16 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:45:16.791560 | orchestrator | 2025-06-03 15:45:16 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:45:19.817595 | orchestrator | 2025-06-03 15:45:19 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED
2025-06-03 15:45:19.817687 | orchestrator | 2025-06-03 15:45:19 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED
2025-06-03 15:45:19.817701 | orchestrator | 2025-06-03 15:45:19 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED
2025-06-03 15:45:19.818473 | orchestrator | 2025-06-03 15:45:19 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:45:19.820091 | orchestrator | 2025-06-03 15:45:19 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:45:19.820275 | orchestrator | 2025-06-03 15:45:19 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:45:22.845907 | orchestrator | 2025-06-03 15:45:22 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED
2025-06-03 15:45:22.846690 | orchestrator | 2025-06-03 15:45:22 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED
2025-06-03 15:45:22.849085 | orchestrator | 2025-06-03 15:45:22 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED
2025-06-03 15:45:22.849920 | orchestrator | 2025-06-03 15:45:22 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:45:22.851095 | orchestrator | 2025-06-03 15:45:22 | INFO  | Task 
06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:45:22.851182 | orchestrator | 2025-06-03 15:45:22 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:25.873031 | orchestrator | 2025-06-03 15:45:25 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED 2025-06-03 15:45:25.873732 | orchestrator | 2025-06-03 15:45:25 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED 2025-06-03 15:45:25.874761 | orchestrator | 2025-06-03 15:45:25 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:45:25.876491 | orchestrator | 2025-06-03 15:45:25 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:45:25.878338 | orchestrator | 2025-06-03 15:45:25 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:45:25.878408 | orchestrator | 2025-06-03 15:45:25 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:28.912129 | orchestrator | 2025-06-03 15:45:28 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED 2025-06-03 15:45:28.914061 | orchestrator | 2025-06-03 15:45:28 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED 2025-06-03 15:45:28.914761 | orchestrator | 2025-06-03 15:45:28 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:45:28.916466 | orchestrator | 2025-06-03 15:45:28 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:45:28.916885 | orchestrator | 2025-06-03 15:45:28 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:45:28.917111 | orchestrator | 2025-06-03 15:45:28 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:31.939916 | orchestrator | 2025-06-03 15:45:31 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED 2025-06-03 15:45:31.940204 | orchestrator | 2025-06-03 15:45:31 | INFO  | Task 
8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED 2025-06-03 15:45:31.941081 | orchestrator | 2025-06-03 15:45:31 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:45:31.941744 | orchestrator | 2025-06-03 15:45:31 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:45:31.942627 | orchestrator | 2025-06-03 15:45:31 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:45:31.942660 | orchestrator | 2025-06-03 15:45:31 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:34.970329 | orchestrator | 2025-06-03 15:45:34 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED 2025-06-03 15:45:34.970728 | orchestrator | 2025-06-03 15:45:34 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED 2025-06-03 15:45:34.971349 | orchestrator | 2025-06-03 15:45:34 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:45:34.972042 | orchestrator | 2025-06-03 15:45:34 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:45:34.972748 | orchestrator | 2025-06-03 15:45:34 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:45:34.974723 | orchestrator | 2025-06-03 15:45:34 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:38.009716 | orchestrator | 2025-06-03 15:45:38 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED 2025-06-03 15:45:38.015452 | orchestrator | 2025-06-03 15:45:38 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED 2025-06-03 15:45:38.015865 | orchestrator | 2025-06-03 15:45:38 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:45:38.019690 | orchestrator | 2025-06-03 15:45:38 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:45:38.020446 | orchestrator | 2025-06-03 15:45:38 | INFO  | Task 
06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:45:38.020490 | orchestrator | 2025-06-03 15:45:38 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:41.043171 | orchestrator | 2025-06-03 15:45:41 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED 2025-06-03 15:45:41.043479 | orchestrator | 2025-06-03 15:45:41 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED 2025-06-03 15:45:41.046744 | orchestrator | 2025-06-03 15:45:41 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:45:41.046830 | orchestrator | 2025-06-03 15:45:41 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:45:41.047612 | orchestrator | 2025-06-03 15:45:41 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:45:41.047649 | orchestrator | 2025-06-03 15:45:41 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:44.082217 | orchestrator | 2025-06-03 15:45:44 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED 2025-06-03 15:45:44.082551 | orchestrator | 2025-06-03 15:45:44 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED 2025-06-03 15:45:44.083354 | orchestrator | 2025-06-03 15:45:44 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:45:44.083757 | orchestrator | 2025-06-03 15:45:44 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:45:44.084633 | orchestrator | 2025-06-03 15:45:44 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:45:44.084715 | orchestrator | 2025-06-03 15:45:44 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:47.127153 | orchestrator | 2025-06-03 15:45:47 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED 2025-06-03 15:45:47.127745 | orchestrator | 2025-06-03 15:45:47 | INFO  | Task 
8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED 2025-06-03 15:45:47.129017 | orchestrator | 2025-06-03 15:45:47 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:45:47.129822 | orchestrator | 2025-06-03 15:45:47 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:45:47.130823 | orchestrator | 2025-06-03 15:45:47 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:45:47.130855 | orchestrator | 2025-06-03 15:45:47 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:50.174287 | orchestrator | 2025-06-03 15:45:50 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED 2025-06-03 15:45:50.174401 | orchestrator | 2025-06-03 15:45:50 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED 2025-06-03 15:45:50.174414 | orchestrator | 2025-06-03 15:45:50 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:45:50.175010 | orchestrator | 2025-06-03 15:45:50 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:45:50.176350 | orchestrator | 2025-06-03 15:45:50 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:45:50.176402 | orchestrator | 2025-06-03 15:45:50 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:53.200180 | orchestrator | 2025-06-03 15:45:53 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED 2025-06-03 15:45:53.200472 | orchestrator | 2025-06-03 15:45:53 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED 2025-06-03 15:45:53.202105 | orchestrator | 2025-06-03 15:45:53 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:45:53.202546 | orchestrator | 2025-06-03 15:45:53 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:45:53.204017 | orchestrator | 2025-06-03 15:45:53 | INFO  | Task 
06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:45:53.204063 | orchestrator | 2025-06-03 15:45:53 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:56.226915 | orchestrator | 2025-06-03 15:45:56 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED 2025-06-03 15:45:56.227019 | orchestrator | 2025-06-03 15:45:56 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED 2025-06-03 15:45:56.227246 | orchestrator | 2025-06-03 15:45:56 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:45:56.227736 | orchestrator | 2025-06-03 15:45:56 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:45:56.228712 | orchestrator | 2025-06-03 15:45:56 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:45:56.228810 | orchestrator | 2025-06-03 15:45:56 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:45:59.258748 | orchestrator | 2025-06-03 15:45:59 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED 2025-06-03 15:45:59.259687 | orchestrator | 2025-06-03 15:45:59 | INFO  | Task 8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state STARTED 2025-06-03 15:45:59.260965 | orchestrator | 2025-06-03 15:45:59 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:45:59.261835 | orchestrator | 2025-06-03 15:45:59 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:45:59.262312 | orchestrator | 2025-06-03 15:45:59 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:45:59.262356 | orchestrator | 2025-06-03 15:45:59 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:02.292912 | orchestrator | 2025-06-03 15:46:02 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED 2025-06-03 15:46:02.293065 | orchestrator | 2025-06-03 15:46:02 | INFO  | Task 
8c4e3a54-4480-4c9d-8534-5df5ce79864b is in state SUCCESS 2025-06-03 15:46:02.293437 | orchestrator | 2025-06-03 15:46:02.293460 | orchestrator | 2025-06-03 15:46:02.293469 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-06-03 15:46:02.293478 | orchestrator | 2025-06-03 15:46:02.293486 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-06-03 15:46:02.293495 | orchestrator | Tuesday 03 June 2025 15:43:41 +0000 (0:00:00.229) 0:00:00.229 ********** 2025-06-03 15:46:02.293504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-06-03 15:46:02.293515 | orchestrator | 2025-06-03 15:46:02.293525 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-06-03 15:46:02.293539 | orchestrator | Tuesday 03 June 2025 15:43:41 +0000 (0:00:00.231) 0:00:00.461 ********** 2025-06-03 15:46:02.293553 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-06-03 15:46:02.293566 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-06-03 15:46:02.293597 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-06-03 15:46:02.293611 | orchestrator | 2025-06-03 15:46:02.293624 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-06-03 15:46:02.293637 | orchestrator | Tuesday 03 June 2025 15:43:42 +0000 (0:00:01.209) 0:00:01.670 ********** 2025-06-03 15:46:02.293651 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-06-03 15:46:02.293664 | orchestrator | 2025-06-03 15:46:02.293676 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-06-03 15:46:02.293684 | orchestrator | Tuesday 03 June 2025 
15:43:44 +0000 (0:00:01.157) 0:00:02.828 ********** 2025-06-03 15:46:02.293714 | orchestrator | changed: [testbed-manager] 2025-06-03 15:46:02.293723 | orchestrator | 2025-06-03 15:46:02.293731 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-06-03 15:46:02.293740 | orchestrator | Tuesday 03 June 2025 15:43:45 +0000 (0:00:00.963) 0:00:03.791 ********** 2025-06-03 15:46:02.293748 | orchestrator | changed: [testbed-manager] 2025-06-03 15:46:02.293755 | orchestrator | 2025-06-03 15:46:02.293763 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-06-03 15:46:02.293771 | orchestrator | Tuesday 03 June 2025 15:43:45 +0000 (0:00:00.883) 0:00:04.675 ********** 2025-06-03 15:46:02.293779 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-06-03 15:46:02.293787 | orchestrator | ok: [testbed-manager] 2025-06-03 15:46:02.293795 | orchestrator | 2025-06-03 15:46:02.293803 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-06-03 15:46:02.293811 | orchestrator | Tuesday 03 June 2025 15:44:23 +0000 (0:00:37.075) 0:00:41.750 ********** 2025-06-03 15:46:02.293819 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-06-03 15:46:02.293827 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-06-03 15:46:02.293835 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-06-03 15:46:02.293842 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-06-03 15:46:02.293850 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-06-03 15:46:02.293858 | orchestrator | 2025-06-03 15:46:02.293866 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-06-03 15:46:02.293874 | orchestrator | Tuesday 03 June 2025 15:44:26 +0000 (0:00:03.801) 0:00:45.551 ********** 2025-06-03 
15:46:02.293881 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-06-03 15:46:02.293889 | orchestrator | 2025-06-03 15:46:02.293897 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-06-03 15:46:02.293905 | orchestrator | Tuesday 03 June 2025 15:44:27 +0000 (0:00:00.467) 0:00:46.019 ********** 2025-06-03 15:46:02.293913 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:46:02.293921 | orchestrator | 2025-06-03 15:46:02.293928 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-06-03 15:46:02.293936 | orchestrator | Tuesday 03 June 2025 15:44:27 +0000 (0:00:00.146) 0:00:46.165 ********** 2025-06-03 15:46:02.293944 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:46:02.293951 | orchestrator | 2025-06-03 15:46:02.293959 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-06-03 15:46:02.293967 | orchestrator | Tuesday 03 June 2025 15:44:27 +0000 (0:00:00.300) 0:00:46.466 ********** 2025-06-03 15:46:02.293975 | orchestrator | changed: [testbed-manager] 2025-06-03 15:46:02.293982 | orchestrator | 2025-06-03 15:46:02.293990 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-06-03 15:46:02.293998 | orchestrator | Tuesday 03 June 2025 15:44:29 +0000 (0:00:01.651) 0:00:48.118 ********** 2025-06-03 15:46:02.294006 | orchestrator | changed: [testbed-manager] 2025-06-03 15:46:02.294014 | orchestrator | 2025-06-03 15:46:02.294189 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-06-03 15:46:02.294205 | orchestrator | Tuesday 03 June 2025 15:44:30 +0000 (0:00:00.731) 0:00:48.849 ********** 2025-06-03 15:46:02.294221 | orchestrator | changed: [testbed-manager] 2025-06-03 15:46:02.294237 | orchestrator | 2025-06-03 15:46:02.294335 | orchestrator | RUNNING HANDLER [osism.services.cephclient : 
Copy bash completion scripts] ***** 2025-06-03 15:46:02.294352 | orchestrator | Tuesday 03 June 2025 15:44:30 +0000 (0:00:00.595) 0:00:49.445 ********** 2025-06-03 15:46:02.294363 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-06-03 15:46:02.294372 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-06-03 15:46:02.294381 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-06-03 15:46:02.294390 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-06-03 15:46:02.294398 | orchestrator | 2025-06-03 15:46:02.294419 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:46:02.294428 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 15:46:02.294438 | orchestrator | 2025-06-03 15:46:02.294447 | orchestrator | 2025-06-03 15:46:02.294470 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:46:02.294480 | orchestrator | Tuesday 03 June 2025 15:44:32 +0000 (0:00:01.295) 0:00:50.741 ********** 2025-06-03 15:46:02.294489 | orchestrator | =============================================================================== 2025-06-03 15:46:02.294497 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.08s 2025-06-03 15:46:02.294506 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.80s 2025-06-03 15:46:02.294514 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.65s 2025-06-03 15:46:02.294524 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.30s 2025-06-03 15:46:02.294539 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.21s 2025-06-03 15:46:02.294561 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.16s 2025-06-03 
15:46:02.294585 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.96s 2025-06-03 15:46:02.294599 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.88s 2025-06-03 15:46:02.294615 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.73s 2025-06-03 15:46:02.294630 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.60s 2025-06-03 15:46:02.294644 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s 2025-06-03 15:46:02.294658 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.30s 2025-06-03 15:46:02.294667 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s 2025-06-03 15:46:02.294676 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2025-06-03 15:46:02.294684 | orchestrator | 2025-06-03 15:46:02.294693 | orchestrator | 2025-06-03 15:46:02.294701 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-06-03 15:46:02.294710 | orchestrator | 2025-06-03 15:46:02.294718 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-06-03 15:46:02.294727 | orchestrator | Tuesday 03 June 2025 15:44:35 +0000 (0:00:00.204) 0:00:00.204 ********** 2025-06-03 15:46:02.294735 | orchestrator | changed: [testbed-manager] 2025-06-03 15:46:02.294744 | orchestrator | 2025-06-03 15:46:02.294753 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-06-03 15:46:02.294761 | orchestrator | Tuesday 03 June 2025 15:44:37 +0000 (0:00:02.058) 0:00:02.263 ********** 2025-06-03 15:46:02.294770 | orchestrator | changed: [testbed-manager] 2025-06-03 15:46:02.294778 | orchestrator | 2025-06-03 15:46:02.294787 | orchestrator | 
TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-06-03 15:46:02.294795 | orchestrator | Tuesday 03 June 2025 15:44:38 +0000 (0:00:00.884) 0:00:03.147 ********** 2025-06-03 15:46:02.294804 | orchestrator | changed: [testbed-manager] 2025-06-03 15:46:02.294812 | orchestrator | 2025-06-03 15:46:02.294821 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-06-03 15:46:02.294830 | orchestrator | Tuesday 03 June 2025 15:44:39 +0000 (0:00:00.900) 0:00:04.047 ********** 2025-06-03 15:46:02.294838 | orchestrator | changed: [testbed-manager] 2025-06-03 15:46:02.294847 | orchestrator | 2025-06-03 15:46:02.294855 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-06-03 15:46:02.294864 | orchestrator | Tuesday 03 June 2025 15:44:40 +0000 (0:00:01.084) 0:00:05.132 ********** 2025-06-03 15:46:02.294873 | orchestrator | changed: [testbed-manager] 2025-06-03 15:46:02.294881 | orchestrator | 2025-06-03 15:46:02.294890 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-06-03 15:46:02.294907 | orchestrator | Tuesday 03 June 2025 15:44:41 +0000 (0:00:01.073) 0:00:06.205 ********** 2025-06-03 15:46:02.294915 | orchestrator | changed: [testbed-manager] 2025-06-03 15:46:02.294924 | orchestrator | 2025-06-03 15:46:02.294932 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-06-03 15:46:02.294941 | orchestrator | Tuesday 03 June 2025 15:44:42 +0000 (0:00:01.044) 0:00:07.250 ********** 2025-06-03 15:46:02.294949 | orchestrator | changed: [testbed-manager] 2025-06-03 15:46:02.294958 | orchestrator | 2025-06-03 15:46:02.294966 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-06-03 15:46:02.294975 | orchestrator | Tuesday 03 June 2025 15:44:44 +0000 (0:00:02.031) 0:00:09.282 ********** 2025-06-03 
15:46:02.294983 | orchestrator | changed: [testbed-manager] 2025-06-03 15:46:02.294992 | orchestrator | 2025-06-03 15:46:02.295001 | orchestrator | TASK [Create admin user] ******************************************************* 2025-06-03 15:46:02.295009 | orchestrator | Tuesday 03 June 2025 15:44:45 +0000 (0:00:01.014) 0:00:10.296 ********** 2025-06-03 15:46:02.295018 | orchestrator | changed: [testbed-manager] 2025-06-03 15:46:02.295026 | orchestrator | 2025-06-03 15:46:02.295034 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-06-03 15:46:02.295043 | orchestrator | Tuesday 03 June 2025 15:45:37 +0000 (0:00:51.508) 0:01:01.804 ********** 2025-06-03 15:46:02.295052 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:46:02.295060 | orchestrator | 2025-06-03 15:46:02.295069 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-03 15:46:02.295077 | orchestrator | 2025-06-03 15:46:02.295086 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-03 15:46:02.295094 | orchestrator | Tuesday 03 June 2025 15:45:37 +0000 (0:00:00.131) 0:01:01.936 ********** 2025-06-03 15:46:02.295103 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:46:02.295111 | orchestrator | 2025-06-03 15:46:02.295120 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-03 15:46:02.295128 | orchestrator | 2025-06-03 15:46:02.295137 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-03 15:46:02.295146 | orchestrator | Tuesday 03 June 2025 15:45:49 +0000 (0:00:11.594) 0:01:13.531 ********** 2025-06-03 15:46:02.295170 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:46:02.295179 | orchestrator | 2025-06-03 15:46:02.295188 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 
2025-06-03 15:46:02.295196 | orchestrator | 2025-06-03 15:46:02.295212 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-03 15:46:02.295221 | orchestrator | Tuesday 03 June 2025 15:46:00 +0000 (0:00:11.355) 0:01:24.886 ********** 2025-06-03 15:46:02.295230 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:46:02.295239 | orchestrator | 2025-06-03 15:46:02.295322 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:46:02.295344 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-03 15:46:02.295358 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:46:02.295373 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:46:02.295394 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:46:02.295407 | orchestrator | 2025-06-03 15:46:02.295421 | orchestrator | 2025-06-03 15:46:02.295434 | orchestrator | 2025-06-03 15:46:02.295448 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:46:02.295460 | orchestrator | Tuesday 03 June 2025 15:46:01 +0000 (0:00:01.233) 0:01:26.120 ********** 2025-06-03 15:46:02.295475 | orchestrator | =============================================================================== 2025-06-03 15:46:02.295502 | orchestrator | Create admin user ------------------------------------------------------ 51.51s 2025-06-03 15:46:02.295518 | orchestrator | Restart ceph manager service ------------------------------------------- 24.18s 2025-06-03 15:46:02.295533 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.06s 2025-06-03 15:46:02.295549 | orchestrator | Enable the ceph dashboard 
----------------------------------------------- 2.03s 2025-06-03 15:46:02.295564 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.08s 2025-06-03 15:46:02.295578 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.07s 2025-06-03 15:46:02.295593 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.04s 2025-06-03 15:46:02.295608 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.01s 2025-06-03 15:46:02.295624 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.90s 2025-06-03 15:46:02.295639 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.88s 2025-06-03 15:46:02.295656 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s 2025-06-03 15:46:02.297688 | orchestrator | 2025-06-03 15:46:02 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:46:02.298357 | orchestrator | 2025-06-03 15:46:02 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:46:02.301293 | orchestrator | 2025-06-03 15:46:02 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:46:02.301375 | orchestrator | 2025-06-03 15:46:02 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:05.338559 | orchestrator | 2025-06-03 15:46:05 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED 2025-06-03 15:46:05.338721 | orchestrator | 2025-06-03 15:46:05 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:46:05.339479 | orchestrator | 2025-06-03 15:46:05 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:46:05.340126 | orchestrator | 2025-06-03 15:46:05 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 
15:46:05.340442 | orchestrator | 2025-06-03 15:46:05 | INFO  | Wait 1 second(s) until the next check
[... the same poll cycle (the four remaining tasks fcce477d-3756-4483-afd3-a81142aa777b, 780b8089-27c0-4c67-875c-1dfcad8ac922, 400485f3-2769-46f4-9849-939b73c51b8d and 06bf7594-82cc-4f39-a568-16db6170ae64 all reported in state STARTED, followed by "Wait 1 second(s) until the next check") repeats roughly every 3 seconds from 15:46:08 to 15:46:20 ...]
2025-06-03 15:46:23.584131 | orchestrator | 2025-06-03 15:46:23 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED 2025-06-03 15:46:23.587548 | orchestrator | 2025-06-03 15:46:23 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:46:23.588248 | orchestrator | 2025-06-03 15:46:23 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:46:23.590184 | orchestrator | 2025-06-03 15:46:23 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:46:23.590235 | orchestrator | 2025-06-03 15:46:23 | INFO  |
Wait 1 second(s) until the next check 2025-06-03 15:46:26.621458 | orchestrator | 2025-06-03 15:46:26 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state STARTED 2025-06-03 15:46:26.621558 | orchestrator | 2025-06-03 15:46:26 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:46:26.622194 | orchestrator | 2025-06-03 15:46:26 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:46:26.623132 | orchestrator | 2025-06-03 15:46:26 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:46:26.623168 | orchestrator | 2025-06-03 15:46:26 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:29.672490 | orchestrator | 2025-06-03 15:46:29.672577 | orchestrator | 2025-06-03 15:46:29 | INFO  | Task fcce477d-3756-4483-afd3-a81142aa777b is in state SUCCESS 2025-06-03 15:46:29.674539 | orchestrator | 2025-06-03 15:46:29.674602 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:46:29.674610 | orchestrator | 2025-06-03 15:46:29.674616 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:46:29.674623 | orchestrator | Tuesday 03 June 2025 15:44:35 +0000 (0:00:00.198) 0:00:00.198 ********** 2025-06-03 15:46:29.674632 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:46:29.674667 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:46:29.674673 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:46:29.674678 | orchestrator | 2025-06-03 15:46:29.674683 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:46:29.674689 | orchestrator | Tuesday 03 June 2025 15:44:35 +0000 (0:00:00.283) 0:00:00.482 ********** 2025-06-03 15:46:29.674694 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-06-03 15:46:29.674700 | orchestrator | ok: [testbed-node-1] => 
(item=enable_barbican_True)
2025-06-03 15:46:29.674705 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-06-03 15:46:29.674711 | orchestrator |
2025-06-03 15:46:29.674716 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-06-03 15:46:29.674721 | orchestrator |
2025-06-03 15:46:29.674726 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-06-03 15:46:29.674731 | orchestrator | Tuesday 03 June 2025 15:44:36 +0000 (0:00:00.496) 0:00:00.979 **********
2025-06-03 15:46:29.674737 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:46:29.674743 | orchestrator |
2025-06-03 15:46:29.674748 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-06-03 15:46:29.674754 | orchestrator | Tuesday 03 June 2025 15:44:37 +0000 (0:00:00.781) 0:00:01.760 **********
2025-06-03 15:46:29.674760 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-06-03 15:46:29.674765 | orchestrator |
2025-06-03 15:46:29.674770 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-06-03 15:46:29.674775 | orchestrator | Tuesday 03 June 2025 15:44:40 +0000 (0:00:03.901) 0:00:05.661 **********
2025-06-03 15:46:29.674780 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-06-03 15:46:29.674786 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-06-03 15:46:29.674791 | orchestrator |
2025-06-03 15:46:29.674796 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-06-03 15:46:29.674813 | orchestrator | Tuesday 03 June 2025 15:44:47 +0000 (0:00:06.817) 0:00:12.479 **********
2025-06-03 15:46:29.674818 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-06-03 15:46:29.674823 | orchestrator |
2025-06-03 15:46:29.674828 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-06-03 15:46:29.674833 | orchestrator | Tuesday 03 June 2025 15:44:51 +0000 (0:00:03.414) 0:00:15.893 **********
2025-06-03 15:46:29.674839 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-03 15:46:29.674844 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-06-03 15:46:29.674849 | orchestrator |
2025-06-03 15:46:29.674854 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-06-03 15:46:29.674859 | orchestrator | Tuesday 03 June 2025 15:44:55 +0000 (0:00:03.947) 0:00:19.841 **********
2025-06-03 15:46:29.674864 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-03 15:46:29.674870 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-06-03 15:46:29.674878 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-06-03 15:46:29.674886 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-06-03 15:46:29.674895 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-06-03 15:46:29.674902 | orchestrator |
2025-06-03 15:46:29.674910 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-06-03 15:46:29.674919 | orchestrator | Tuesday 03 June 2025 15:45:09 +0000 (0:00:14.119) 0:00:33.960 **********
2025-06-03 15:46:29.674927 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-06-03 15:46:29.674934 | orchestrator |
2025-06-03 15:46:29.674942 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-06-03 15:46:29.674951 | orchestrator | Tuesday 03 June 2025 15:45:13 +0000 (0:00:04.080) 0:00:38.041 **********
2025-06-03 15:46:29.674969 |
orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:46:29.674998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:46:29.675007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 
'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:46:29.675022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:46:29.675031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:46:29.675047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:46:29.675061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:46:29.675071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675088 | orchestrator |
2025-06-03 15:46:29.675095 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-06-03 15:46:29.675100 | orchestrator | Tuesday 03 June 2025 15:45:15 +0000 (0:00:02.131) 0:00:40.172 **********
2025-06-03 15:46:29.675106 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-06-03 15:46:29.675111 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-06-03 15:46:29.675116 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-06-03 15:46:29.675121 | orchestrator |
2025-06-03 15:46:29.675126 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-06-03 15:46:29.675134 | orchestrator | Tuesday 03 June 2025 15:45:16 +0000 (0:00:01.231) 0:00:41.404 **********
2025-06-03 15:46:29.675140 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:46:29.675145 | orchestrator |
2025-06-03 15:46:29.675150 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-06-03 15:46:29.675155 | orchestrator | Tuesday 03 June 2025 15:45:17 +0000 (0:00:00.374) 0:00:41.778 **********
2025-06-03 15:46:29.675160 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:46:29.675165 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:46:29.675170 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:46:29.675175 | orchestrator |
2025-06-03 15:46:29.675180 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-06-03 15:46:29.675190 | orchestrator | Tuesday 03 June 2025 15:45:17 +0000 (0:00:00.916) 0:00:42.695 **********
2025-06-03 15:46:29.675196 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:46:29.675201 | orchestrator |
2025-06-03 15:46:29.675206 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-06-03 15:46:29.675211 | orchestrator | Tuesday 03 June 2025 15:45:18 +0000 (0:00:00.512) 0:00:43.207 **********
2025-06-03 15:46:29.675247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-03 15:46:29.675258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value':
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:46:29.675264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-03 15:46:29.675273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:46:29.675283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:46:29.675288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-03 15:46:29.675294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:46:29.675305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:46:29.675310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:46:29.675316 | orchestrator | 2025-06-03 15:46:29.675321 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-06-03 15:46:29.675326 | orchestrator | Tuesday 03 June 2025 15:45:21 +0000 (0:00:03.136) 0:00:46.343 
********** 2025-06-03 15:46:29.675335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-03 15:46:29.675344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:46:29.675351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:46:29.675356 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:46:29.675367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-03 15:46:29.675376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-03 15:46:29.675385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:46:29.675402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-03 15:46:29.675408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675420 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:46:29.675425 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:46:29.675430 | orchestrator |
2025-06-03 15:46:29.675435 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2025-06-03 15:46:29.675440 | orchestrator | Tuesday 03 June 2025 15:45:22 +0000 (0:00:01.253) 0:00:47.597 **********
2025-06-03 15:46:29.675452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-03 15:46:29.675462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675487 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:46:29.675492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-03 15:46:29.675497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675508 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:46:29.675518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-03 15:46:29.675523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675542 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:46:29.675547 | orchestrator |
2025-06-03 15:46:29.675552 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2025-06-03 15:46:29.675557 | orchestrator | Tuesday 03 June 2025 15:45:23 +0000 (0:00:00.853) 0:00:48.451 **********
2025-06-03 15:46:29.675563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-03 15:46:29.675578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-03 15:46:29.675583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-03 15:46:29.675593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675636 | orchestrator |
2025-06-03 15:46:29.675641 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2025-06-03 15:46:29.675646 | orchestrator | Tuesday 03 June 2025 15:45:26 +0000 (0:00:03.115) 0:00:51.567 **********
2025-06-03 15:46:29.675651 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:46:29.675657 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:46:29.675662 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:46:29.675667 | orchestrator |
2025-06-03 15:46:29.675672 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2025-06-03 15:46:29.675677 | orchestrator | Tuesday 03 June 2025 15:45:29 +0000 (0:00:02.790) 0:00:54.357 **********
2025-06-03 15:46:29.675682 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-03 15:46:29.675688 | orchestrator |
2025-06-03 15:46:29.675693 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2025-06-03 15:46:29.675698 | orchestrator | Tuesday 03 June 2025 15:45:30 +0000 (0:00:00.879) 0:00:55.237 **********
2025-06-03 15:46:29.675703 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:46:29.675708 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:46:29.675713 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:46:29.675718 | orchestrator |
2025-06-03 15:46:29.675723 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2025-06-03 15:46:29.675728 | orchestrator | Tuesday 03 June 2025 15:45:31 +0000 (0:00:01.127) 0:00:56.365 **********
2025-06-03 15:46:29.675737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-03 15:46:29.675742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-03 15:46:29.675752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-03 15:46:29.675764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675812 | orchestrator |
2025-06-03 15:46:29.675818 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2025-06-03 15:46:29.675827 | orchestrator | Tuesday 03 June 2025 15:45:40 +0000 (0:00:09.159) 0:01:05.524 **********
2025-06-03 15:46:29.675832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-03 15:46:29.675838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675851 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:46:29.675857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-03 15:46:29.675862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675881 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:46:29.675887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-03 15:46:29.675895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675906 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:46:29.675911 | orchestrator |
2025-06-03 15:46:29.675916 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2025-06-03 15:46:29.675921 | orchestrator | Tuesday 03 June 2025 15:45:42 +0000 (0:00:01.335) 0:01:06.860 **********
2025-06-03 15:46:29.675927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-03 15:46:29.675940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-03 15:46:29.675946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-03 15:46:29.675954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:46:29.675996 | orchestrator |
2025-06-03 15:46:29.676002 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-06-03 15:46:29.676007 | orchestrator | Tuesday 03 June 2025 15:45:45 +0000 (0:00:03.823) 0:01:10.683 **********
2025-06-03 15:46:29.676012 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:46:29.676017 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:46:29.676022 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:46:29.676030 | orchestrator |
2025-06-03 15:46:29.676038 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-06-03 15:46:29.676046 | orchestrator | Tuesday 03 June 2025 15:45:46 +0000 (0:00:00.824) 0:01:11.508 **********
2025-06-03 15:46:29.676055 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:46:29.676062 | orchestrator |
2025-06-03 15:46:29.676070 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-06-03 15:46:29.676078 | orchestrator | Tuesday 03 June 2025 15:45:48 +0000 (0:00:02.169) 0:01:13.677 **********
2025-06-03 15:46:29.676086 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:46:29.676094 | orchestrator |
2025-06-03 15:46:29.676102 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-06-03 15:46:29.676114 | orchestrator | Tuesday 03 June 2025 15:45:51 +0000 (0:00:02.386) 0:01:16.063 **********
2025-06-03 15:46:29.676122 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:46:29.676130 | orchestrator |
2025-06-03 15:46:29.676138 | orchestrator
| TASK [barbican : Flush handlers] *********************************************** 2025-06-03 15:46:29.676146 | orchestrator | Tuesday 03 June 2025 15:46:03 +0000 (0:00:12.155) 0:01:28.219 ********** 2025-06-03 15:46:29.676155 | orchestrator | 2025-06-03 15:46:29.676163 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-03 15:46:29.676171 | orchestrator | Tuesday 03 June 2025 15:46:03 +0000 (0:00:00.074) 0:01:28.293 ********** 2025-06-03 15:46:29.676180 | orchestrator | 2025-06-03 15:46:29.676188 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-03 15:46:29.676197 | orchestrator | Tuesday 03 June 2025 15:46:03 +0000 (0:00:00.045) 0:01:28.338 ********** 2025-06-03 15:46:29.676208 | orchestrator | 2025-06-03 15:46:29.676273 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-06-03 15:46:29.676281 | orchestrator | Tuesday 03 June 2025 15:46:03 +0000 (0:00:00.055) 0:01:28.394 ********** 2025-06-03 15:46:29.676286 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:46:29.676291 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:46:29.676296 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:46:29.676301 | orchestrator | 2025-06-03 15:46:29.676306 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-06-03 15:46:29.676311 | orchestrator | Tuesday 03 June 2025 15:46:16 +0000 (0:00:12.629) 0:01:41.023 ********** 2025-06-03 15:46:29.676316 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:46:29.676322 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:46:29.676327 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:46:29.676332 | orchestrator | 2025-06-03 15:46:29.676337 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-06-03 15:46:29.676342 | orchestrator | Tuesday 03 June 2025 
15:46:22 +0000 (0:00:06.130) 0:01:47.154 ********** 2025-06-03 15:46:29.676347 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:46:29.676352 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:46:29.676357 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:46:29.676362 | orchestrator | 2025-06-03 15:46:29.676367 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:46:29.676374 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-03 15:46:29.676379 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:46:29.676385 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:46:29.676390 | orchestrator | 2025-06-03 15:46:29.676395 | orchestrator | 2025-06-03 15:46:29.676400 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:46:29.676405 | orchestrator | Tuesday 03 June 2025 15:46:28 +0000 (0:00:05.793) 0:01:52.947 ********** 2025-06-03 15:46:29.676410 | orchestrator | =============================================================================== 2025-06-03 15:46:29.676415 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.12s 2025-06-03 15:46:29.676425 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.63s 2025-06-03 15:46:29.676431 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.16s 2025-06-03 15:46:29.676436 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.16s 2025-06-03 15:46:29.676441 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.82s 2025-06-03 15:46:29.676446 | orchestrator | barbican : Restart barbican-keystone-listener 
container ----------------- 6.13s 2025-06-03 15:46:29.676451 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.79s 2025-06-03 15:46:29.676456 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.08s 2025-06-03 15:46:29.676461 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.95s 2025-06-03 15:46:29.676466 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.90s 2025-06-03 15:46:29.676471 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.82s 2025-06-03 15:46:29.676476 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.41s 2025-06-03 15:46:29.676481 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.14s 2025-06-03 15:46:29.676486 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.12s 2025-06-03 15:46:29.676491 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.79s 2025-06-03 15:46:29.676501 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.39s 2025-06-03 15:46:29.676507 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.17s 2025-06-03 15:46:29.676512 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.13s 2025-06-03 15:46:29.676517 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.34s 2025-06-03 15:46:29.676522 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.25s 2025-06-03 15:46:29.676527 | orchestrator | 2025-06-03 15:46:29 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:46:29.676533 | orchestrator | 2025-06-03 15:46:29 | INFO  | Task 
400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:46:29.676544 | orchestrator | 2025-06-03 15:46:29 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:46:29.676706 | orchestrator | 2025-06-03 15:46:29 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:46:41.847340 | orchestrator | 2025-06-03 15:46:41 | INFO  | Task 7e609db4-277b-429d-b892-29894225b5ab is in state STARTED 2025-06-03 15:46:41.848623 | orchestrator | 2025-06-03 15:46:41 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:46:41.849467 | orchestrator | 2025-06-03 15:46:41 | INFO  | Task 5b431dbb-f7ef-4ca7-8158-0c40aee4025a is in state STARTED 2025-06-03 15:46:41.850412 | orchestrator | 2025-06-03 15:46:41 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:46:41.851177 | orchestrator | 2025-06-03 15:46:41 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:46:41.851217 | orchestrator | 2025-06-03 15:46:41 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:00.101734 | orchestrator | 2025-06-03 15:47:00 | INFO  | Task 7e609db4-277b-429d-b892-29894225b5ab is in state STARTED 2025-06-03 15:47:00.102108 | orchestrator | 2025-06-03 15:47:00 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:47:00.102344 | orchestrator | 2025-06-03 15:47:00 | INFO  | Task 5b431dbb-f7ef-4ca7-8158-0c40aee4025a is in state SUCCESS 2025-06-03 15:47:00.102826 | orchestrator | 2025-06-03 15:47:00 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:47:00.103711 | orchestrator | 2025-06-03 15:47:00 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:47:00.103735 | orchestrator | 2025-06-03 15:47:00 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:42.792658 | orchestrator | 2025-06-03 15:47:42 | INFO  | Task 7e609db4-277b-429d-b892-29894225b5ab is in state STARTED 2025-06-03 15:47:42.795006 | orchestrator | 2025-06-03 15:47:42 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED 2025-06-03 15:47:42.797925 | orchestrator | 2025-06-03 15:47:42 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED 2025-06-03 15:47:42.800491 | orchestrator | 2025-06-03 15:47:42 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:47:42.800563 | orchestrator | 2025-06-03 15:47:42 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:47:45.843909 | orchestrator | 2025-06-03 15:47:45.843988 | orchestrator | None 2025-06-03 15:47:45.843998 | orchestrator | 2025-06-03 15:47:45.844005 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:47:45.844013 | orchestrator | 2025-06-03 15:47:45.844019 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:47:45.844026 | orchestrator | Tuesday 03 June 2025 15:46:35 +0000 (0:00:00.916) 0:00:00.916 ********** 2025-06-03 15:47:45.844033 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:47:45.844040 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:47:45.844046 | orchestrator | ok: [testbed-node-2] 2025-06-03
15:47:45.844053 | orchestrator | 2025-06-03 15:47:45.844059 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:47:45.844066 | orchestrator | Tuesday 03 June 2025 15:46:35 +0000 (0:00:00.376) 0:00:01.292 ********** 2025-06-03 15:47:45.844073 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-06-03 15:47:45.844081 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-06-03 15:47:45.844117 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-06-03 15:47:45.844129 | orchestrator | 2025-06-03 15:47:45.844188 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-06-03 15:47:45.844200 | orchestrator | 2025-06-03 15:47:45.844253 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-03 15:47:45.844264 | orchestrator | Tuesday 03 June 2025 15:46:36 +0000 (0:00:00.421) 0:00:01.713 ********** 2025-06-03 15:47:45.844274 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:47:45.844286 | orchestrator | 2025-06-03 15:47:45.844292 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-06-03 15:47:45.844298 | orchestrator | Tuesday 03 June 2025 15:46:36 +0000 (0:00:00.539) 0:00:02.253 ********** 2025-06-03 15:47:45.844304 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-06-03 15:47:45.844311 | orchestrator | 2025-06-03 15:47:45.844317 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-06-03 15:47:45.844323 | orchestrator | Tuesday 03 June 2025 15:46:40 +0000 (0:00:03.855) 0:00:06.109 ********** 2025-06-03 15:47:45.844344 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-06-03 
15:47:45.844351 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-06-03 15:47:45.844357 | orchestrator | 2025-06-03 15:47:45.844363 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-06-03 15:47:45.844369 | orchestrator | Tuesday 03 June 2025 15:46:47 +0000 (0:00:07.031) 0:00:13.141 ********** 2025-06-03 15:47:45.844376 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-03 15:47:45.844382 | orchestrator | 2025-06-03 15:47:45.844388 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-06-03 15:47:45.844394 | orchestrator | Tuesday 03 June 2025 15:46:51 +0000 (0:00:03.415) 0:00:16.557 ********** 2025-06-03 15:47:45.844401 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-03 15:47:45.844407 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-06-03 15:47:45.844413 | orchestrator | 2025-06-03 15:47:45.844419 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-06-03 15:47:45.844425 | orchestrator | Tuesday 03 June 2025 15:46:54 +0000 (0:00:03.855) 0:00:20.412 ********** 2025-06-03 15:47:45.844431 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-03 15:47:45.844437 | orchestrator | 2025-06-03 15:47:45.844445 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-06-03 15:47:45.844453 | orchestrator | Tuesday 03 June 2025 15:46:58 +0000 (0:00:03.252) 0:00:23.664 ********** 2025-06-03 15:47:45.844460 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-06-03 15:47:45.844467 | orchestrator | 2025-06-03 15:47:45.844474 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-03 15:47:45.844481 | orchestrator | Tuesday 03 June 2025 15:47:02 +0000 
(0:00:03.929) 0:00:27.594 ********** 2025-06-03 15:47:45.844488 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:45.844495 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:45.844502 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:45.844509 | orchestrator | 2025-06-03 15:47:45.844516 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-06-03 15:47:45.844523 | orchestrator | Tuesday 03 June 2025 15:47:02 +0000 (0:00:00.288) 0:00:27.882 ********** 2025-06-03 15:47:45.844533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:47:45.844568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:47:45.844580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:47:45.844588 | orchestrator | 2025-06-03 15:47:45.844595 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-06-03 15:47:45.844603 | orchestrator | Tuesday 03 June 2025 15:47:03 +0000 (0:00:00.828) 0:00:28.710 ********** 2025-06-03 15:47:45.844610 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:45.844617 | orchestrator | 2025-06-03 15:47:45.844624 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-06-03 15:47:45.844631 | orchestrator | Tuesday 03 June 2025 15:47:03 +0000 (0:00:00.134) 0:00:28.845 ********** 2025-06-03 
15:47:45.844638 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:45.844645 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:45.844652 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:45.844659 | orchestrator | 2025-06-03 15:47:45.844666 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-03 15:47:45.844673 | orchestrator | Tuesday 03 June 2025 15:47:03 +0000 (0:00:00.503) 0:00:29.349 ********** 2025-06-03 15:47:45.844680 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:47:45.844687 | orchestrator | 2025-06-03 15:47:45.844694 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-06-03 15:47:45.844701 | orchestrator | Tuesday 03 June 2025 15:47:04 +0000 (0:00:00.532) 0:00:29.881 ********** 2025-06-03 15:47:45.844709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:47:45.844726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:47:45.844734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:47:45.844742 | orchestrator | 2025-06-03 15:47:45.844749 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-06-03 15:47:45.844756 | orchestrator | Tuesday 03 June 2025 15:47:05 +0000 
(0:00:01.477) 0:00:31.359 ********** 2025-06-03 15:47:45.844767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:47:45.844774 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:45.844782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 
15:47:45.844793 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:45.844805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:47:45.844813 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:45.844819 | orchestrator | 2025-06-03 15:47:45.844826 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-06-03 15:47:45.844832 | orchestrator | Tuesday 03 June 2025 15:47:06 +0000 (0:00:00.703) 0:00:32.062 ********** 2025-06-03 15:47:45.844842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:47:45.844853 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:45.844867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:47:45.844877 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:45.844887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:47:45.844904 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:45.844914 | orchestrator | 2025-06-03 15:47:45.844924 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-06-03 15:47:45.844935 | orchestrator | Tuesday 03 June 2025 15:47:07 +0000 (0:00:00.676) 0:00:32.739 ********** 2025-06-03 15:47:45.844953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:47:45.844964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:47:45.844979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:47:45.844986 | orchestrator | 2025-06-03 15:47:45.844992 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-06-03 15:47:45.844998 | orchestrator | Tuesday 03 June 2025 15:47:08 +0000 (0:00:01.384) 0:00:34.123 ********** 2025-06-03 15:47:45.845010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:47:45.845017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:47:45.845027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:47:45.845034 | orchestrator | 2025-06-03 15:47:45.845040 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-06-03 15:47:45.845046 | orchestrator | Tuesday 03 June 2025 15:47:12 +0000 (0:00:03.511) 0:00:37.635 ********** 2025-06-03 15:47:45.845053 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-03 15:47:45.845059 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-03 15:47:45.845065 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-03 15:47:45.845072 | orchestrator | 2025-06-03 15:47:45.845078 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-06-03 15:47:45.845084 | orchestrator | Tuesday 03 June 2025 15:47:13 +0000 (0:00:01.440) 0:00:39.075 ********** 2025-06-03 15:47:45.845090 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:47:45.845096 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:47:45.845102 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:47:45.845109 | orchestrator | 2025-06-03 15:47:45.845115 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-06-03 15:47:45.845121 | orchestrator | 
Tuesday 03 June 2025 15:47:14 +0000 (0:00:01.302) 0:00:40.378 ********** 2025-06-03 15:47:45.845178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:47:45.845188 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:45.845195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}})  2025-06-03 15:47:45.845201 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:45.845214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-03 15:47:45.845221 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:45.845227 | orchestrator | 2025-06-03 15:47:45.845233 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-06-03 15:47:45.845279 | orchestrator | Tuesday 03 June 2025 15:47:15 +0000 (0:00:00.524) 0:00:40.902 ********** 2025-06-03 15:47:45.845291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:47:45.845322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-03 15:47:45.845335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-03 15:47:45.845344 | orchestrator |
2025-06-03 15:47:45.845350 | orchestrator | TASK [placement : Creating placement databases] ********************************
2025-06-03 15:47:45.845356 | orchestrator | Tuesday 03 June 2025 15:47:17 +0000 (0:00:01.871) 0:00:42.774 **********
2025-06-03 15:47:45.845363 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:47:45.845369 | orchestrator |
2025-06-03 15:47:45.845375 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2025-06-03 15:47:45.845381 | orchestrator | Tuesday 03 June 2025 15:47:19 +0000 (0:00:02.130) 0:00:44.904 **********
2025-06-03 15:47:45.845387 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:47:45.845393 | orchestrator |
2025-06-03 15:47:45.845399 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-06-03 15:47:45.845405 | orchestrator | Tuesday 03 June 2025 15:47:21 +0000 (0:00:02.157) 0:00:47.062 **********
2025-06-03 15:47:45.845411 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:47:45.845417 | orchestrator |
2025-06-03 15:47:45.845424 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-06-03 15:47:45.845430 | orchestrator | Tuesday 03 June 2025 15:47:36 +0000 (0:00:14.526) 0:01:01.588 **********
2025-06-03 15:47:45.845436 | orchestrator |
2025-06-03 15:47:45.845442 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-06-03 15:47:45.845448 | orchestrator | Tuesday 03 June 2025 15:47:36 +0000 (0:00:00.065) 0:01:01.654 **********
2025-06-03 15:47:45.845454 | orchestrator |
2025-06-03 15:47:45.845465 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-06-03 15:47:45.845472 | orchestrator | Tuesday 03 June 2025 15:47:36 +0000 (0:00:00.065) 0:01:01.719 **********
2025-06-03 15:47:45.845478 | orchestrator |
2025-06-03 15:47:45.845484 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-06-03 15:47:45.845490 | orchestrator | Tuesday 03 June 2025 15:47:36 +0000 (0:00:00.066) 0:01:01.786 **********
2025-06-03 15:47:45.845496 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:47:45.845503 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:47:45.845511 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:47:45.845530 | orchestrator |
2025-06-03 15:47:45.845539 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:47:45.845550 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-03 15:47:45.845562 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-03 15:47:45.845571 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-03 15:47:45.845581 | orchestrator |
2025-06-03 15:47:45.845592 | orchestrator |
2025-06-03 15:47:45.845599 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:47:45.845605 | orchestrator | Tuesday 03 June 2025 15:47:44 +0000 (0:00:07.859) 0:01:09.645 **********
2025-06-03 15:47:45.845611 | orchestrator | ===============================================================================
2025-06-03 15:47:45.845617 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.53s
2025-06-03 15:47:45.845623 | orchestrator | placement : Restart placement-api container ----------------------------- 7.86s
2025-06-03 15:47:45.845629 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.03s
2025-06-03 15:47:45.845635 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.93s
2025-06-03 15:47:45.845646 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.86s
2025-06-03 15:47:45.845652 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.86s
2025-06-03 15:47:45.845658 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.51s
2025-06-03 15:47:45.845664 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.42s
2025-06-03 15:47:45.845670 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.25s
2025-06-03 15:47:45.845679 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.16s
2025-06-03 15:47:45.845690 | orchestrator | placement : Creating placement databases -------------------------------- 2.13s
2025-06-03 15:47:45.845699 | orchestrator | placement : Check placement containers ---------------------------------- 1.87s
2025-06-03 15:47:45.845709 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.48s
2025-06-03 15:47:45.845719 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.44s
2025-06-03 15:47:45.845729 | orchestrator | placement : Copying over config.json files for services ----------------- 1.38s
2025-06-03 15:47:45.845739 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.30s
2025-06-03 15:47:45.845750 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.83s
2025-06-03 15:47:45.845757 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.70s
2025-06-03 15:47:45.845764 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.68s
2025-06-03 15:47:45.845770 | orchestrator | placement : include_tasks ----------------------------------------------- 0.54s
2025-06-03 15:47:45.845880 | orchestrator | 2025-06-03 15:47:45 | INFO  | Task 7e609db4-277b-429d-b892-29894225b5ab is in state SUCCESS
2025-06-03 15:47:45.845892 | orchestrator | 2025-06-03 15:47:45 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state STARTED
2025-06-03 15:47:45.845902 | orchestrator | 2025-06-03 15:47:45 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:47:45.845911 | orchestrator | 2025-06-03 15:47:45 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:47:45.846391 | orchestrator | 2025-06-03 15:47:45 | INFO  | Task 02e7aabf-85e7-442f-8fe0-20c046ed8188 is in state STARTED
2025-06-03 15:47:45.846526 | orchestrator | 2025-06-03 15:47:45 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:47:48.903942 | orchestrator | 2025-06-03 15:47:48 | INFO  | Task bcbebe60-ce01-4c52-a0c4-0926bca5615d is in state STARTED
2025-06-03 15:47:48.906549 | orchestrator | 2025-06-03 15:47:48 | INFO  | Task 780b8089-27c0-4c67-875c-1dfcad8ac922 is in state SUCCESS
2025-06-03 15:47:48.908502 | orchestrator |
2025-06-03 15:47:48.908596 | orchestrator |
2025-06-03 15:47:48.908615 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-03 15:47:48.908630 | orchestrator |
2025-06-03 15:47:48.908644 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-03 15:47:48.908723 | orchestrator | Tuesday 03 June 2025 15:44:36 +0000 (0:00:00.533) 0:00:00.533 **********
2025-06-03 15:47:48.908738 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:47:48.908810 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:47:48.908825 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:47:48.908840 | orchestrator |
2025-06-03 15:47:48.908854 | orchestrator |
TASK [Group hosts based on enabled services] ***********************************
2025-06-03 15:47:48.908870 | orchestrator | Tuesday 03 June 2025 15:44:36 +0000 (0:00:00.394) 0:00:00.927 **********
2025-06-03 15:47:48.908885 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-06-03 15:47:48.908900 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-06-03 15:47:48.908914 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-06-03 15:47:48.908928 | orchestrator |
2025-06-03 15:47:48.908942 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-06-03 15:47:48.908956 | orchestrator |
2025-06-03 15:47:48.908970 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-06-03 15:47:48.908984 | orchestrator | Tuesday 03 June 2025 15:44:37 +0000 (0:00:00.317) 0:00:01.245 **********
2025-06-03 15:47:48.908998 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:47:48.909014 | orchestrator |
2025-06-03 15:47:48.909027 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-06-03 15:47:48.909042 | orchestrator | Tuesday 03 June 2025 15:44:37 +0000 (0:00:00.446) 0:00:01.691 **********
2025-06-03 15:47:48.909057 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-06-03 15:47:48.909071 | orchestrator |
2025-06-03 15:47:48.909084 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-06-03 15:47:48.909100 | orchestrator | Tuesday 03 June 2025 15:44:41 +0000 (0:00:03.824) 0:00:05.515 **********
2025-06-03 15:47:48.909115 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-06-03 15:47:48.909152 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-06-03 15:47:48.909166 | orchestrator |
2025-06-03 15:47:48.909174 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-06-03 15:47:48.909185 | orchestrator | Tuesday 03 June 2025 15:44:47 +0000 (0:00:06.620) 0:00:12.136 **********
2025-06-03 15:47:48.909209 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating projects (5 retries left).
2025-06-03 15:47:48.909218 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-03 15:47:48.909228 | orchestrator |
2025-06-03 15:47:48.909237 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-06-03 15:47:48.909246 | orchestrator | Tuesday 03 June 2025 15:45:03 +0000 (0:00:16.064) 0:00:28.200 **********
2025-06-03 15:47:48.909255 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-03 15:47:48.909264 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-06-03 15:47:48.909274 | orchestrator |
2025-06-03 15:47:48.909283 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-06-03 15:47:48.909292 | orchestrator | Tuesday 03 June 2025 15:45:07 +0000 (0:00:03.435) 0:00:31.636 **********
2025-06-03 15:47:48.909302 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-03 15:47:48.909331 | orchestrator |
2025-06-03 15:47:48.909341 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-06-03 15:47:48.909350 | orchestrator | Tuesday 03 June 2025 15:45:11 +0000 (0:00:03.778) 0:00:35.414 **********
2025-06-03 15:47:48.909395 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-06-03 15:47:48.909404 | orchestrator |
2025-06-03 15:47:48.909413 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2025-06-03 15:47:48.909463 | orchestrator | Tuesday 03 June 2025 15:45:15 +0000 (0:00:04.590) 0:00:40.005 **********
2025-06-03 15:47:48.909482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:47:48.909573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:47:48.909583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value':
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:47:48.909599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:47:48.909609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:47:48.909626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:47:48.909634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.909650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.909659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.909668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.909688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.909709 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.909723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.909735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.909846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.909866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.909881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.909904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.909930 | orchestrator | 2025-06-03 15:47:48.909945 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-06-03 15:47:48.909959 | orchestrator | Tuesday 03 June 2025 15:45:19 +0000 (0:00:03.653) 0:00:43.658 ********** 2025-06-03 15:47:48.909967 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:48.909976 | orchestrator | 2025-06-03 15:47:48.909984 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-06-03 15:47:48.909992 | orchestrator | Tuesday 03 June 2025 15:45:19 +0000 (0:00:00.125) 0:00:43.784 ********** 2025-06-03 15:47:48.909999 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:48.910007 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:48.910061 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:48.910073 | orchestrator | 2025-06-03 15:47:48.910081 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-03 15:47:48.910089 | orchestrator | Tuesday 03 June 2025 15:45:19 +0000 (0:00:00.244) 0:00:44.028 ********** 2025-06-03 15:47:48.910097 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:47:48.910105 | orchestrator | 2025-06-03 15:47:48.910113 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-06-03 15:47:48.910121 | orchestrator | Tuesday 03 June 2025 15:45:20 +0000 (0:00:00.571) 0:00:44.600 ********** 2025-06-03 15:47:48.910152 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:47:48.910170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:47:48.910181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:47:48.910211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:47:48.910226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2025-06-03 15:47:48.910239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:47:48.910262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.910277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.910291 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.910308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.910321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.910330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.910338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.910352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.910365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.910383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.910412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.910431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.910444 | orchestrator | 2025-06-03 15:47:48.910456 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-06-03 15:47:48.910468 | orchestrator | Tuesday 03 June 2025 15:45:26 +0000 (0:00:06.048) 0:00:50.649 ********** 2025-06-03 15:47:48.910512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:47:48.910535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-03 15:47:48.910545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.910565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.910573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.910586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.910595 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:47:48.910604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:47:48.910612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-03 15:47:48.910625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.910640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.910648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.910660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.910668 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:47:48.910682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:47:48.910696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-03 15:47:48.910738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.910765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.910781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.910801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.910815 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:47:48.910830 | orchestrator |
2025-06-03 15:47:48.910839 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-06-03 15:47:48.910847 | orchestrator | Tuesday 03 June 2025 15:45:28 +0000 (0:00:01.780) 0:00:52.429 **********
2025-06-03 15:47:48.910855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:47:48.910863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-03 15:47:48.910881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.910902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.910916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.910936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.910951 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:47:48.910959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:47:48.910968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-03 15:47:48.910982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.910997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911026 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:47:48.911039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:47:48.911052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-03 15:47:48.911065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911157 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:47:48.911166 | orchestrator |
2025-06-03 15:47:48.911174 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-06-03 15:47:48.911187 | orchestrator | Tuesday 03 June 2025 15:45:29 +0000 (0:00:01.566) 0:00:53.995 **********
2025-06-03 15:47:48.911195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:47:48.911204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:47:48.911225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:47:48.911233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-03 15:47:48.911242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-03 15:47:48.911255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-03 15:47:48.911263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-03 15:47:48.911393 | orchestrator |
2025-06-03 15:47:48.911401 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-06-03 15:47:48.911409 | orchestrator | Tuesday 03 June 2025 15:45:36 +0000 (0:00:06.524) 0:01:00.520 **********
2025-06-03 15:47:48.911417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:47:48.911426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:47:48.911440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-03 15:47:48.911454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-03 15:47:48.911542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-03 15:47:48.911570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-03 15:47:48.911584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes':
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.911606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.911619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.911643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.911657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.911671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.911689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.911704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.911726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.911741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2025-06-03 15:47:48.911758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.911783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.911811 | orchestrator | 2025-06-03 15:47:48.911827 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-06-03 15:47:48.911840 | orchestrator | Tuesday 03 June 2025 15:45:55 +0000 (0:00:19.594) 0:01:20.114 ********** 2025-06-03 15:47:48.911852 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-03 15:47:48.911865 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-03 15:47:48.911878 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-03 15:47:48.911891 | orchestrator | 2025-06-03 15:47:48.911904 | 
orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-06-03 15:47:48.911918 | orchestrator | Tuesday 03 June 2025 15:46:00 +0000 (0:00:04.112) 0:01:24.227 ********** 2025-06-03 15:47:48.911926 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-03 15:47:48.911934 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-03 15:47:48.911942 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-03 15:47:48.911950 | orchestrator | 2025-06-03 15:47:48.911958 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-06-03 15:47:48.911974 | orchestrator | Tuesday 03 June 2025 15:46:03 +0000 (0:00:03.267) 0:01:27.494 ********** 2025-06-03 15:47:48.911987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:47:48.911997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:47:48.912013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:47:48.912022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:47:48.912031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:47:48.912074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:47:48.912103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 
15:47:48.912183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.912192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.912206 | orchestrator | 2025-06-03 15:47:48.912214 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-06-03 15:47:48.912222 | orchestrator | Tuesday 03 June 2025 15:46:06 +0000 (0:00:03.297) 0:01:30.792 ********** 2025-06-03 15:47:48.912234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:47:48.912242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:47:48.912251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:47:48.912264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:47:48.912273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:47:48.912286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:47:48.912364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912389 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.912402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.912411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.912424 | orchestrator | 2025-06-03 15:47:48.912436 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-03 
15:47:48.912454 | orchestrator | Tuesday 03 June 2025 15:46:09 +0000 (0:00:03.101) 0:01:33.894 ********** 2025-06-03 15:47:48.912471 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:48.912484 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:48.912496 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:48.912509 | orchestrator | 2025-06-03 15:47:48.912522 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-06-03 15:47:48.912534 | orchestrator | Tuesday 03 June 2025 15:46:10 +0000 (0:00:00.510) 0:01:34.404 ********** 2025-06-03 15:47:48.912555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:47:48.912569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-03 15:47:48.912583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912720 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:47:48.912734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:47:48.912743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-03 15:47:48.912751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912796 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:47:48.912805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-03 15:47:48.912817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-03 15:47:48.912826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-03 15:47:48.912868 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:47:48.912877 | orchestrator | 2025-06-03 15:47:48.912885 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-06-03 15:47:48.912893 | orchestrator | Tuesday 03 June 2025 15:46:11 +0000 (0:00:01.060) 0:01:35.464 ********** 2025-06-03 15:47:48.912901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:47:48.912913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:47:48.912922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-03 15:47:48.912935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:47:48.912948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:47:48.912956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-03 15:47:48.912968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.912977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.912985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.912994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.913013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.913021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.913029 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.913041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.913050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.913058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.913067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.913088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-03 15:47:48.913097 | orchestrator | 2025-06-03 15:47:48.913105 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-03 15:47:48.913113 | orchestrator | Tuesday 03 June 2025 15:46:16 +0000 
(0:00:05.128) 0:01:40.593 **********
2025-06-03 15:47:48.913121 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:47:48.913129 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:47:48.913181 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:47:48.913195 | orchestrator |
2025-06-03 15:47:48.913208 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-06-03 15:47:48.913222 | orchestrator | Tuesday 03 June 2025 15:46:17 +0000 (0:00:00.764) 0:01:41.357 **********
2025-06-03 15:47:48.913231 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-06-03 15:47:48.913239 | orchestrator |
2025-06-03 15:47:48.913247 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-06-03 15:47:48.913254 | orchestrator | Tuesday 03 June 2025 15:46:20 +0000 (0:00:03.502) 0:01:44.860 **********
2025-06-03 15:47:48.913262 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-03 15:47:48.913270 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-06-03 15:47:48.913278 | orchestrator |
2025-06-03 15:47:48.913286 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-06-03 15:47:48.913294 | orchestrator | Tuesday 03 June 2025 15:46:22 +0000 (0:00:02.310) 0:01:47.170 **********
2025-06-03 15:47:48.913301 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:47:48.913309 | orchestrator |
2025-06-03 15:47:48.913317 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-06-03 15:47:48.913325 | orchestrator | Tuesday 03 June 2025 15:46:38 +0000 (0:00:15.557) 0:02:02.728 **********
2025-06-03 15:47:48.913332 | orchestrator |
2025-06-03 15:47:48.913341 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-06-03 15:47:48.913348 | orchestrator | Tuesday 03 June 2025 15:46:38 +0000 (0:00:00.085) 0:02:02.814 **********
2025-06-03 15:47:48.913356 | orchestrator |
2025-06-03 15:47:48.913364 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-06-03 15:47:48.913372 | orchestrator | Tuesday 03 June 2025 15:46:38 +0000 (0:00:00.066) 0:02:02.881 **********
2025-06-03 15:47:48.913380 | orchestrator |
2025-06-03 15:47:48.913388 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-06-03 15:47:48.913395 | orchestrator | Tuesday 03 June 2025 15:46:38 +0000 (0:00:00.082) 0:02:02.964 **********
2025-06-03 15:47:48.913403 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:47:48.913416 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:47:48.913424 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:47:48.913432 | orchestrator |
2025-06-03 15:47:48.913440 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-06-03 15:47:48.913447 | orchestrator | Tuesday 03 June 2025 15:46:55 +0000 (0:00:16.444) 0:02:19.408 **********
2025-06-03 15:47:48.913465 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:47:48.913473 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:47:48.913481 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:47:48.913489 | orchestrator |
2025-06-03 15:47:48.913497 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-06-03 15:47:48.913505 | orchestrator | Tuesday 03 June 2025 15:47:08 +0000 (0:00:12.925) 0:02:32.334 **********
2025-06-03 15:47:48.913512 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:47:48.913520 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:47:48.913528 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:47:48.913535 | orchestrator |
2025-06-03 15:47:48.913543 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-06-03 15:47:48.913551 | orchestrator | Tuesday 03 June 2025 15:47:15 +0000 (0:00:07.228) 0:02:39.563 **********
2025-06-03 15:47:48.913559 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:47:48.913567 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:47:48.913575 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:47:48.913583 | orchestrator |
2025-06-03 15:47:48.913590 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-06-03 15:47:48.913598 | orchestrator | Tuesday 03 June 2025 15:47:21 +0000 (0:00:05.893) 0:02:45.456 **********
2025-06-03 15:47:48.913606 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:47:48.913614 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:47:48.913622 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:47:48.913629 | orchestrator |
2025-06-03 15:47:48.913637 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-06-03 15:47:48.913645 | orchestrator | Tuesday 03 June 2025 15:47:29 +0000 (0:00:08.565) 0:02:54.022 **********
2025-06-03 15:47:48.913652 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:47:48.913660 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:47:48.913668 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:47:48.913676 | orchestrator |
2025-06-03 15:47:48.913683 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-06-03 15:47:48.913692 | orchestrator | Tuesday 03 June 2025 15:47:38 +0000 (0:00:08.971) 0:03:02.993 **********
2025-06-03 15:47:48.913700 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:47:48.913707 | orchestrator |
2025-06-03 15:47:48.913715 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:47:48.913724 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-03 15:47:48.913732 |
orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-03 15:47:48.913746 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-03 15:47:48.913754 | orchestrator |
2025-06-03 15:47:48.913762 | orchestrator |
2025-06-03 15:47:48.913770 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:47:48.913778 | orchestrator | Tuesday 03 June 2025 15:47:46 +0000 (0:00:07.395) 0:03:10.389 **********
2025-06-03 15:47:48.913786 | orchestrator | ===============================================================================
2025-06-03 15:47:48.913793 | orchestrator | designate : Copying over designate.conf -------------------------------- 19.59s
2025-06-03 15:47:48.913801 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 16.44s
2025-06-03 15:47:48.913809 | orchestrator | service-ks-register : designate | Creating projects -------------------- 16.06s
2025-06-03 15:47:48.913817 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.56s
2025-06-03 15:47:48.913825 | orchestrator | designate : Restart designate-api container ---------------------------- 12.93s
2025-06-03 15:47:48.913833 | orchestrator | designate : Restart designate-worker container -------------------------- 8.97s
2025-06-03 15:47:48.913841 | orchestrator | designate : Restart designate-mdns container ---------------------------- 8.57s
2025-06-03 15:47:48.913856 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.40s
2025-06-03 15:47:48.913865 | orchestrator | designate : Restart designate-central container ------------------------- 7.23s
2025-06-03 15:47:48.913873 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.62s
2025-06-03 15:47:48.913881 | orchestrator | designate : Copying over config.json files for services ----------------- 6.52s
2025-06-03 15:47:48.913889 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.05s
2025-06-03 15:47:48.913897 | orchestrator | designate : Restart designate-producer container ------------------------ 5.89s
2025-06-03 15:47:48.913905 | orchestrator | designate : Check designate containers ---------------------------------- 5.13s
2025-06-03 15:47:48.913913 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.59s
2025-06-03 15:47:48.913921 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.11s
2025-06-03 15:47:48.913928 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.82s
2025-06-03 15:47:48.913936 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.78s
2025-06-03 15:47:48.913944 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.65s
2025-06-03 15:47:48.913952 | orchestrator | designate : Creating Designate databases -------------------------------- 3.50s
2025-06-03 15:47:48.913965 | orchestrator | 2025-06-03 15:47:48 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:47:48.913973 | orchestrator | 2025-06-03 15:47:48 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:47:48.913981 | orchestrator | 2025-06-03 15:47:48 | INFO  | Task 02e7aabf-85e7-442f-8fe0-20c046ed8188 is in state STARTED
2025-06-03 15:47:48.913990 | orchestrator | 2025-06-03 15:47:48 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:47:51.963409 | orchestrator | 2025-06-03 15:47:51 | INFO  | Task bcbebe60-ce01-4c52-a0c4-0926bca5615d is in state STARTED
2025-06-03 15:47:51.968309 | orchestrator | 2025-06-03 15:47:51 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:47:51.971858 | orchestrator | 2025-06-03 15:47:51 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:47:51.975190 | orchestrator | 2025-06-03 15:47:51 | INFO  | Task 02e7aabf-85e7-442f-8fe0-20c046ed8188 is in state STARTED
2025-06-03 15:47:51.975427 | orchestrator | 2025-06-03 15:47:51 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:47:55.004874 | orchestrator | 2025-06-03 15:47:55 | INFO  | Task bcbebe60-ce01-4c52-a0c4-0926bca5615d is in state SUCCESS
2025-06-03 15:47:55.010937 | orchestrator | 2025-06-03 15:47:55 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED
2025-06-03 15:47:55.014720 | orchestrator | 2025-06-03 15:47:55 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:47:55.016336 | orchestrator | 2025-06-03 15:47:55 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:47:55.017432 | orchestrator | 2025-06-03 15:47:55 | INFO  | Task 02e7aabf-85e7-442f-8fe0-20c046ed8188 is in state STARTED
2025-06-03 15:47:55.018288 | orchestrator | 2025-06-03 15:47:55 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:47:58.054393 | orchestrator | 2025-06-03 15:47:58 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED
2025-06-03 15:47:58.054521 | orchestrator | 2025-06-03 15:47:58 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:47:58.055389 | orchestrator | 2025-06-03 15:47:58 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:47:58.055949 | orchestrator | 2025-06-03 15:47:58 | INFO  | Task 02e7aabf-85e7-442f-8fe0-20c046ed8188 is in state STARTED
2025-06-03 15:47:58.055977 | orchestrator | 2025-06-03 15:47:58 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:48:01.084748 | orchestrator | 2025-06-03 15:48:01 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED
2025-06-03 15:48:01.084853 | orchestrator | 2025-06-03
15:48:01 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:48:01.084868 | orchestrator | 2025-06-03 15:48:01 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:48:01.085247 | orchestrator | 2025-06-03 15:48:01 | INFO  | Task 02e7aabf-85e7-442f-8fe0-20c046ed8188 is in state STARTED
2025-06-03 15:48:01.085268 | orchestrator | 2025-06-03 15:48:01 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:49:23.308127 | orchestrator | 2025-06-03 15:49:23 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED
2025-06-03 15:49:23.310300 | orchestrator | 2025-06-03 15:49:23 | INFO  | Task
400485f3-2769-46f4-9849-939b73c51b8d is in state STARTED
2025-06-03 15:49:23.312633 | orchestrator | 2025-06-03 15:49:23 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:49:23.314710 | orchestrator | 2025-06-03 15:49:23 | INFO  | Task 02e7aabf-85e7-442f-8fe0-20c046ed8188 is in state STARTED
2025-06-03 15:49:23.314841 | orchestrator | 2025-06-03 15:49:23 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:49:26.353746 | orchestrator | 2025-06-03 15:49:26 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED
2025-06-03 15:49:26.354009 | orchestrator | 2025-06-03 15:49:26 | INFO  | Task 88f15d3f-7f29-447d-a26c-c055d2bc5000 is in state STARTED
2025-06-03 15:49:26.356688 | orchestrator | 2025-06-03 15:49:26 | INFO  | Task 400485f3-2769-46f4-9849-939b73c51b8d is in state SUCCESS
2025-06-03 15:49:26.360856 | orchestrator |
2025-06-03 15:49:26.360920 | orchestrator |
2025-06-03 15:49:26.360928 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-03 15:49:26.360937 | orchestrator |
2025-06-03 15:49:26.360944 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-03 15:49:26.360952 | orchestrator | Tuesday 03 June 2025 15:47:50 +0000 (0:00:00.184) 0:00:00.184 **********
2025-06-03 15:49:26.360959 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:49:26.360966 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:49:26.360972 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:49:26.360979 | orchestrator |
2025-06-03 15:49:26.360985 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-03 15:49:26.360991 | orchestrator | Tuesday 03 June 2025 15:47:50 +0000 (0:00:00.297) 0:00:00.481 **********
2025-06-03 15:49:26.360998 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-06-03 15:49:26.361005 | orchestrator | ok: [testbed-node-1] =>
(item=enable_keystone_True)
2025-06-03 15:49:26.361012 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-06-03 15:49:26.361018 | orchestrator |
2025-06-03 15:49:26.361025 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-06-03 15:49:26.361109 | orchestrator |
2025-06-03 15:49:26.361119 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-06-03 15:49:26.361126 | orchestrator | Tuesday 03 June 2025 15:47:51 +0000 (0:00:00.637) 0:00:01.119 **********
2025-06-03 15:49:26.361133 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:49:26.361139 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:49:26.361145 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:49:26.361152 | orchestrator |
2025-06-03 15:49:26.361159 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:49:26.361166 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:49:26.361175 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:49:26.361182 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 15:49:26.361189 | orchestrator |
2025-06-03 15:49:26.361195 | orchestrator |
2025-06-03 15:49:26.361202 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:49:26.361208 | orchestrator | Tuesday 03 June 2025 15:47:52 +0000 (0:00:00.651) 0:00:01.770 **********
2025-06-03 15:49:26.361232 | orchestrator | ===============================================================================
2025-06-03 15:49:26.361238 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.65s
2025-06-03 15:49:26.361245 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s
2025-06-03 15:49:26.361251 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2025-06-03 15:49:26.361257 | orchestrator |
2025-06-03 15:49:26.361264 | orchestrator |
2025-06-03 15:49:26.361270 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-03 15:49:26.361277 | orchestrator |
2025-06-03 15:49:26.361283 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-03 15:49:26.361290 | orchestrator | Tuesday 03 June 2025 15:44:35 +0000 (0:00:00.392) 0:00:00.392 **********
2025-06-03 15:49:26.361296 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:49:26.361303 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:49:26.361309 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:49:26.361392 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:49:26.361400 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:49:26.361407 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:49:26.361414 | orchestrator |
2025-06-03 15:49:26.361421 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-03 15:49:26.361439 | orchestrator | Tuesday 03 June 2025 15:44:36 +0000 (0:00:00.547) 0:00:00.940 **********
2025-06-03 15:49:26.361458 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-06-03 15:49:26.361472 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-06-03 15:49:26.361485 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-06-03 15:49:26.361496 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-06-03 15:49:26.361509 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-06-03 15:49:26.361520 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-06-03 15:49:26.361530 | orchestrator |
2025-06-03 15:49:26.361543 | orchestrator |
PLAY [Apply role neutron] ******************************************************
2025-06-03 15:49:26.361555 | orchestrator |
2025-06-03 15:49:26.361571 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-06-03 15:49:26.361583 | orchestrator | Tuesday 03 June 2025 15:44:36 +0000 (0:00:00.672) 0:00:01.612 **********
2025-06-03 15:49:26.361596 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:49:26.361609 | orchestrator |
2025-06-03 15:49:26.361616 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-06-03 15:49:26.361622 | orchestrator | Tuesday 03 June 2025 15:44:37 +0000 (0:00:00.993) 0:00:02.606 **********
2025-06-03 15:49:26.361628 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:49:26.361634 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:49:26.361643 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:49:26.361655 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:49:26.361668 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:49:26.361680 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:49:26.361686 | orchestrator |
2025-06-03 15:49:26.361693 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-06-03 15:49:26.361699 | orchestrator | Tuesday 03 June 2025 15:44:39 +0000 (0:00:01.208) 0:00:03.814 **********
2025-06-03 15:49:26.361706 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:49:26.361768 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:49:26.361776 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:49:26.361784 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:49:26.361793 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:49:26.361818 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:49:26.361824 | orchestrator |
2025-06-03 15:49:26.361831 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-06-03 15:49:26.361837 | orchestrator | Tuesday 03 June 2025 15:44:40 +0000 (0:00:01.051) 0:00:04.865 **********
2025-06-03 15:49:26.361843 | orchestrator | ok: [testbed-node-0] => {
2025-06-03 15:49:26.361851 | orchestrator |  "changed": false,
2025-06-03 15:49:26.361857 | orchestrator |  "msg": "All assertions passed"
2025-06-03 15:49:26.361864 | orchestrator | }
2025-06-03 15:49:26.361870 | orchestrator | ok: [testbed-node-1] => {
2025-06-03 15:49:26.361876 | orchestrator |  "changed": false,
2025-06-03 15:49:26.361883 | orchestrator |  "msg": "All assertions passed"
2025-06-03 15:49:26.361888 | orchestrator | }
2025-06-03 15:49:26.361895 | orchestrator | ok: [testbed-node-2] => {
2025-06-03 15:49:26.361900 | orchestrator |  "changed": false,
2025-06-03 15:49:26.361907 | orchestrator |  "msg": "All assertions passed"
2025-06-03 15:49:26.361913 | orchestrator | }
2025-06-03 15:49:26.361919 | orchestrator | ok: [testbed-node-3] => {
2025-06-03 15:49:26.361925 | orchestrator |  "changed": false,
2025-06-03 15:49:26.361931 | orchestrator |  "msg": "All assertions passed"
2025-06-03 15:49:26.361937 | orchestrator | }
2025-06-03 15:49:26.361943 | orchestrator | ok: [testbed-node-4] => {
2025-06-03 15:49:26.361958 | orchestrator |  "changed": false,
2025-06-03 15:49:26.361963 | orchestrator |  "msg": "All assertions passed"
2025-06-03 15:49:26.361968 | orchestrator | }
2025-06-03 15:49:26.361974 | orchestrator | ok: [testbed-node-5] => {
2025-06-03 15:49:26.361979 | orchestrator |  "changed": false,
2025-06-03 15:49:26.361985 | orchestrator |  "msg": "All assertions passed"
2025-06-03 15:49:26.361990 | orchestrator | }
2025-06-03 15:49:26.361997 | orchestrator |
2025-06-03 15:49:26.362002 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-06-03 15:49:26.362116 | orchestrator | Tuesday 03 June 2025 15:44:40 +0000 (0:00:00.669) 0:00:05.535 **********
2025-06-03 15:49:26.362130 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.362137 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.362143 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.362148 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.362154 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.362159 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.362166 | orchestrator |
2025-06-03 15:49:26.362173 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-06-03 15:49:26.362179 | orchestrator | Tuesday 03 June 2025 15:44:41 +0000 (0:00:00.648) 0:00:06.184 **********
2025-06-03 15:49:26.362186 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-06-03 15:49:26.362192 | orchestrator |
2025-06-03 15:49:26.362199 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-06-03 15:49:26.362206 | orchestrator | Tuesday 03 June 2025 15:44:45 +0000 (0:00:03.738) 0:00:09.922 **********
2025-06-03 15:49:26.362220 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-06-03 15:49:26.362228 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-06-03 15:49:26.362234 | orchestrator |
2025-06-03 15:49:26.362241 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-06-03 15:49:26.362247 | orchestrator | Tuesday 03 June 2025 15:44:51 +0000 (0:00:06.742) 0:00:16.665 **********
2025-06-03 15:49:26.362254 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-03 15:49:26.362260 | orchestrator |
2025-06-03 15:49:26.362266 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-06-03 15:49:26.362273 | orchestrator | Tuesday 03 June 2025
15:44:55 +0000 (0:00:03.183) 0:00:19.849 ********** 2025-06-03 15:49:26.362279 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-03 15:49:26.362286 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-06-03 15:49:26.362292 | orchestrator | 2025-06-03 15:49:26.362298 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-06-03 15:49:26.362304 | orchestrator | Tuesday 03 June 2025 15:44:59 +0000 (0:00:04.004) 0:00:23.853 ********** 2025-06-03 15:49:26.362310 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-03 15:49:26.362317 | orchestrator | 2025-06-03 15:49:26.362322 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-06-03 15:49:26.362328 | orchestrator | Tuesday 03 June 2025 15:45:01 +0000 (0:00:02.901) 0:00:26.755 ********** 2025-06-03 15:49:26.362334 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-06-03 15:49:26.362340 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-06-03 15:49:26.362346 | orchestrator | 2025-06-03 15:49:26.362353 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-03 15:49:26.362359 | orchestrator | Tuesday 03 June 2025 15:45:09 +0000 (0:00:07.545) 0:00:34.301 ********** 2025-06-03 15:49:26.362365 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:49:26.362372 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:49:26.362378 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:49:26.362384 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:49:26.362390 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:49:26.362397 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:49:26.362411 | orchestrator | 2025-06-03 15:49:26.362418 | orchestrator | TASK [Load and persist kernel modules] 
***************************************** 2025-06-03 15:49:26.362424 | orchestrator | Tuesday 03 June 2025 15:45:10 +0000 (0:00:00.779) 0:00:35.080 ********** 2025-06-03 15:49:26.362431 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:49:26.362437 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:49:26.362444 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:49:26.362450 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:49:26.362474 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:49:26.362481 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:49:26.362488 | orchestrator | 2025-06-03 15:49:26.362494 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-06-03 15:49:26.362500 | orchestrator | Tuesday 03 June 2025 15:45:12 +0000 (0:00:01.914) 0:00:36.994 ********** 2025-06-03 15:49:26.362507 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:49:26.362513 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:49:26.362520 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:49:26.362526 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:49:26.362533 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:49:26.362552 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:49:26.362559 | orchestrator | 2025-06-03 15:49:26.362566 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-03 15:49:26.362573 | orchestrator | Tuesday 03 June 2025 15:45:13 +0000 (0:00:01.163) 0:00:38.158 ********** 2025-06-03 15:49:26.362580 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:49:26.362587 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:49:26.362593 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:49:26.362600 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:49:26.362608 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:49:26.362615 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:49:26.362622 
| orchestrator | 2025-06-03 15:49:26.362628 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-06-03 15:49:26.362634 | orchestrator | Tuesday 03 June 2025 15:45:15 +0000 (0:00:02.342) 0:00:40.501 ********** 2025-06-03 15:49:26.362644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:49:26.362662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:49:26.362669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:49:26.362686 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:49:26.362701 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:49:26.362709 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:49:26.362716 | orchestrator | 2025-06-03 15:49:26.362722 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-06-03 15:49:26.362729 | orchestrator | Tuesday 03 June 2025 15:45:18 +0000 (0:00:03.192) 0:00:43.693 ********** 2025-06-03 15:49:26.362737 | orchestrator | [WARNING]: Skipped 2025-06-03 15:49:26.362745 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-06-03 15:49:26.362753 | orchestrator | due to this access issue: 2025-06-03 15:49:26.362766 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-06-03 15:49:26.362773 | orchestrator | a directory 2025-06-03 15:49:26.362781 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:49:26.362787 | orchestrator | 2025-06-03 15:49:26.362794 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-03 15:49:26.362800 | orchestrator | Tuesday 03 June 2025 15:45:19 +0000 (0:00:00.768) 0:00:44.462 ********** 2025-06-03 15:49:26.362816 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:49:26.362825 | orchestrator | 2025-06-03 15:49:26.362832 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-06-03 15:49:26.362837 | orchestrator | Tuesday 03 June 2025 15:45:20 +0000 (0:00:01.117) 0:00:45.580 ********** 2025-06-03 15:49:26.362844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:49:26.362870 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:49:26.362877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-03 15:49:26.362883 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:49:26.362894 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:49:26.362915 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-03 15:49:26.362922 | orchestrator | 2025-06-03 15:49:26.362928 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-06-03 15:49:26.362934 | orchestrator | Tuesday 03 June 2025 15:45:24 +0000 (0:00:03.331) 0:00:48.911 ********** 2025-06-03 15:49:26.362956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:49:26.362963 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:49:26.362970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:49:26.362977 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:49:26.362988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:49:26.363000 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:49:26.363008 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:49:26.363015 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:49:26.363021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:49:26.363028 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:49:26.363061 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:49:26.363069 | orchestrator | skipping: [testbed-node-5] 
2025-06-03 15:49:26.363076 | orchestrator | 2025-06-03 15:49:26.363082 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-06-03 15:49:26.363089 | orchestrator | Tuesday 03 June 2025 15:45:26 +0000 (0:00:02.476) 0:00:51.387 ********** 2025-06-03 15:49:26.363096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:49:26.363108 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:49:26.363119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-03 15:49:26.363126 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:49:26.363133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:49:26.363139 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:49:26.363146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.363153 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.363165 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.363171 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.363178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.363189 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.363196 | orchestrator |
2025-06-03 15:49:26.363202 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2025-06-03 15:49:26.363209 | orchestrator | Tuesday 03 June 2025 15:45:30 +0000 (0:00:03.604) 0:00:54.992 **********
2025-06-03 15:49:26.363216 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.363223 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.363233 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.363240 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.363247 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.363253 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.363258 | orchestrator |
2025-06-03 15:49:26.363264 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2025-06-03 15:49:26.363270 | orchestrator | Tuesday 03 June 2025 15:45:32 +0000 (0:00:02.799) 0:00:57.792 **********
2025-06-03 15:49:26.363276 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.363282 | orchestrator |
2025-06-03 15:49:26.363287 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2025-06-03 15:49:26.363293 | orchestrator | Tuesday 03 June 2025 15:45:33 +0000 (0:00:00.135) 0:00:57.927 **********
2025-06-03 15:49:26.363298 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.363327 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.363333 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.363339 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.363345 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.363351 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.363356 | orchestrator |
2025-06-03 15:49:26.363362 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2025-06-03 15:49:26.363367 | orchestrator | Tuesday 03 June 2025 15:45:33 +0000 (0:00:00.669) 0:00:58.597 **********
2025-06-03 15:49:26.363373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.363379 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.363403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.363415 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.363422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.363428 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.363438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.363445 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.363451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.363457 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.363463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.363469 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.363476 | orchestrator |
2025-06-03 15:49:26.363482 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2025-06-03 15:49:26.363494 | orchestrator | Tuesday 03 June 2025 15:45:36 +0000 (0:00:02.767) 0:01:01.364 **********
2025-06-03 15:49:26.363515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.363523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.363532 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.363538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.363544 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.363560 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.363566 | orchestrator |
2025-06-03 15:49:26.363571 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2025-06-03 15:49:26.363577 | orchestrator | Tuesday 03 June 2025 15:45:41 +0000 (0:00:04.582) 0:01:05.946 **********
2025-06-03 15:49:26.363582 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.363604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.363611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.363618 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.363634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.363641 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.363647 | orchestrator |
2025-06-03 15:49:26.363652 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-06-03 15:49:26.363659 | orchestrator | Tuesday 03 June 2025 15:45:48 +0000 (0:00:07.361) 0:01:13.308 **********
2025-06-03 15:49:26.363668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.363674 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.363681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.363699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.363705 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.363711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.363717 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.363725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.363734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.363739 | orchestrator |
2025-06-03 15:49:26.363745 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-06-03 15:49:26.363751 | orchestrator | Tuesday 03 June 2025 15:45:51 +0000 (0:00:02.801) 0:01:16.687 **********
2025-06-03 15:49:26.363757 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.363770 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.363776 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.363783 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:49:26.363789 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:49:26.363795 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:49:26.363800 | orchestrator |
2025-06-03 15:49:26.363806 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-06-03 15:49:26.363812 | orchestrator | Tuesday 03 June 2025 15:45:54 +0000 (0:00:02.801) 0:01:19.489 **********
2025-06-03 15:49:26.363818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.363824 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.363837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.363844 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.363850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.363858 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.363867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.363875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.363890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.363897 | orchestrator |
2025-06-03 15:49:26.363904 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-06-03 15:49:26.363908 | orchestrator | Tuesday 03 June 2025 15:45:58 +0000 (0:00:03.664) 0:01:23.153 **********
2025-06-03 15:49:26.363912 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.363916 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.363920 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.363924 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.363927 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.363931 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.363935 | orchestrator |
2025-06-03 15:49:26.363938 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-06-03 15:49:26.363942 | orchestrator | Tuesday 03 June 2025 15:46:00 +0000 (0:00:02.081) 0:01:25.235 **********
2025-06-03 15:49:26.363946 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.363950 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.363953 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.363957 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.363961 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.363964 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.363968 | orchestrator |
2025-06-03 15:49:26.363972 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-06-03 15:49:26.363976 | orchestrator | Tuesday 03 June 2025 15:46:03 +0000 (0:00:03.281) 0:01:28.516 **********
2025-06-03 15:49:26.363980 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.363983 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.363987 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.363991 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.363994 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.363998 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.364002 | orchestrator |
2025-06-03 15:49:26.364005 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-06-03 15:49:26.364009 | orchestrator | Tuesday 03 June 2025 15:46:06 +0000 (0:00:03.040) 0:01:31.557 **********
2025-06-03 15:49:26.364013 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.364017 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.364024 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.364028 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.364058 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.364065 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.364072 | orchestrator |
2025-06-03 15:49:26.364079 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-06-03 15:49:26.364088 | orchestrator | Tuesday 03 June 2025 15:46:09 +0000 (0:00:02.257) 0:01:33.815 **********
2025-06-03 15:49:26.364094 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.364101 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.364107 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.364111 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.364115 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.364119 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.364122 | orchestrator |
2025-06-03 15:49:26.364126 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-06-03 15:49:26.364130 | orchestrator | Tuesday 03 June 2025 15:46:11 +0000 (0:00:02.182) 0:01:35.998 **********
2025-06-03 15:49:26.364134 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.364138 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.364141 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.364145 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.364149 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.364152 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.364158 | orchestrator |
2025-06-03 15:49:26.364164 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-06-03 15:49:26.364170 | orchestrator | Tuesday 03 June 2025 15:46:13 +0000 (0:00:02.305) 0:01:38.303 **********
2025-06-03 15:49:26.364176 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-03 15:49:26.364182 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.364188 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-03 15:49:26.364193 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.364199 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-03 15:49:26.364205 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.364211 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-03 15:49:26.364217 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.364223 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-03 15:49:26.364230 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.364237 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-03 15:49:26.364243 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.364250 | orchestrator |
2025-06-03 15:49:26.364257 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-06-03 15:49:26.364261 | orchestrator | Tuesday 03 June 2025 15:46:15 +0000 (0:00:02.120) 0:01:40.424 **********
2025-06-03 15:49:26.364272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.364285 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.364292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.364298 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.364308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.364315 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.364321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:49:26.364328 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:49:26.364334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:49:26.364340 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:49:26.364352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-03 15:49:26.364360 | orchestrator | skipping: [testbed-node-4] 
2025-06-03 15:49:26.364366 | orchestrator |
2025-06-03 15:49:26.364373 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2025-06-03 15:49:26.364379 | orchestrator | Tuesday 03 June 2025 15:46:18 +0000 (0:00:03.252) 0:01:43.676 **********
2025-06-03 15:49:26.364385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.364392 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.364402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.364409 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.364415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.364422 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.364433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.364445 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.364451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.364456 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.364462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.364469 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.364474 | orchestrator |
2025-06-03 15:49:26.364484 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-06-03 15:49:26.364490 | orchestrator | Tuesday 03 June 2025 15:46:21 +0000 (0:00:02.472) 0:01:46.148 **********
2025-06-03 15:49:26.364496 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.364502 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.364509 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.364515 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.364521 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.364527 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.364533 | orchestrator |
2025-06-03 15:49:26.364539 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-06-03 15:49:26.364545 | orchestrator | Tuesday 03 June 2025 15:46:24 +0000 (0:00:03.326) 0:01:49.475 **********
2025-06-03 15:49:26.364552 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.364558 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.364564 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.364570 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:49:26.364576 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:49:26.364582 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:49:26.364588 | orchestrator |
2025-06-03 15:49:26.364595 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-06-03 15:49:26.364601 | orchestrator | Tuesday 03 June 2025 15:46:28 +0000 (0:00:03.791) 0:01:53.266 **********
2025-06-03 15:49:26.364607 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.364614 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.364620 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.364626 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.364632 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.364645 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.364651 | orchestrator |
2025-06-03 15:49:26.364657 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-06-03 15:49:26.364663 | orchestrator | Tuesday 03 June 2025 15:46:31 +0000 (0:00:02.584) 0:01:55.851 **********
2025-06-03 15:49:26.364669 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.364675 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.364681 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.364687 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.364693 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.364700 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.364706 | orchestrator |
2025-06-03 15:49:26.364712 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-06-03 15:49:26.364718 | orchestrator | Tuesday 03 June 2025 15:46:33 +0000 (0:00:02.186) 0:01:58.037 **********
2025-06-03 15:49:26.364724 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.364730 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.364736 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.364742 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.364748 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.364754 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.364760 | orchestrator |
2025-06-03 15:49:26.364766 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-06-03 15:49:26.364772 | orchestrator | Tuesday 03 June 2025 15:46:36 +0000 (0:00:03.158) 0:02:01.196 **********
2025-06-03 15:49:26.364779 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.364785 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.364791 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.364799 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.364806 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.364812 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.364818 | orchestrator |
2025-06-03 15:49:26.364824 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-06-03 15:49:26.364834 | orchestrator | Tuesday 03 June 2025 15:46:38 +0000 (0:00:02.235) 0:02:03.431 **********
2025-06-03 15:49:26.364841 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.364847 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.364853 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.364860 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.364866 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.364872 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.364878 | orchestrator |
2025-06-03 15:49:26.364884 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-06-03 15:49:26.364890 | orchestrator | Tuesday 03 June 2025 15:46:43 +0000 (0:00:04.683) 0:02:08.114 **********
2025-06-03 15:49:26.364896 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.364902 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.364908 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.364915 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.364921 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.364926 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.364932 | orchestrator |
2025-06-03 15:49:26.364937 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-06-03 15:49:26.364943 | orchestrator | Tuesday 03 June 2025 15:46:46 +0000 (0:00:03.199) 0:02:11.314 **********
2025-06-03 15:49:26.364949 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.364955 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.364962 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.364968 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.364974 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.364980 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.364986 | orchestrator |
2025-06-03 15:49:26.364992 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-06-03 15:49:26.365004 | orchestrator | Tuesday 03 June 2025 15:46:49 +0000 (0:00:02.676) 0:02:13.990 **********
2025-06-03 15:49:26.365010 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.365016 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.365022 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.365028 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.365053 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.365059 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.365066 | orchestrator |
2025-06-03 15:49:26.365072 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-06-03 15:49:26.365078 | orchestrator | Tuesday 03 June 2025 15:46:52 +0000 (0:00:02.842) 0:02:16.832 **********
2025-06-03 15:49:26.365091 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-03 15:49:26.365097 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.365103 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-03 15:49:26.365109 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.365116 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-03 15:49:26.365122 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.365127 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-03 15:49:26.365134 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.365139 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-03 15:49:26.365145 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.365152 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-03 15:49:26.365158 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.365164 | orchestrator |
2025-06-03 15:49:26.365171 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-06-03 15:49:26.365177 | orchestrator | Tuesday 03 June 2025 15:46:55 +0000 (0:00:03.026) 0:02:19.859 **********
2025-06-03 15:49:26.365184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.365190 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.365201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.365208 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.365222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.365229 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.365238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.365245 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.365252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.365258 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.365264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.365271 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.365277 | orchestrator |
2025-06-03 15:49:26.365283 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2025-06-03 15:49:26.365289 | orchestrator | Tuesday 03 June 2025 15:46:58 +0000 (0:00:03.659) 0:02:23.518 **********
2025-06-03 15:49:26.365300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.365311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.365322 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.365329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-03 15:49:26.365335 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.365447 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-03 15:49:26.365458 | orchestrator |
2025-06-03 15:49:26.365465 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-06-03 15:49:26.365471 | orchestrator | Tuesday 03 June 2025 15:47:01 +0000 (0:00:03.078) 0:02:26.596 **********
2025-06-03 15:49:26.365477 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:26.365483 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:26.365489 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:26.365495 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:49:26.365502 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:49:26.365508 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:49:26.365514 | orchestrator |
2025-06-03 15:49:26.365520 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-06-03 15:49:26.365526 | orchestrator | Tuesday 03 June 2025 15:47:02 +0000 (0:00:00.489) 0:02:27.086 **********
2025-06-03 15:49:26.365532 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:49:26.365539 | orchestrator |
2025-06-03 15:49:26.365545 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-06-03 15:49:26.365551 | orchestrator | Tuesday 03 June 2025 15:47:04 +0000 (0:00:02.325) 0:02:29.412 **********
2025-06-03 15:49:26.365557 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:49:26.365563 | orchestrator |
2025-06-03 15:49:26.365569 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-06-03 15:49:26.365576 | orchestrator | Tuesday 03 June 2025 15:47:06 +0000 (0:00:02.157) 0:02:31.570 **********
2025-06-03 15:49:26.365582 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:49:26.365588 | orchestrator |
2025-06-03 15:49:26.365594 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-03 15:49:26.365605 | orchestrator | Tuesday 03 June 2025 15:47:51 +0000 (0:00:45.033) 0:03:16.604 **********
2025-06-03 15:49:26.365611 | orchestrator |
2025-06-03 15:49:26.365617 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-03 15:49:26.365623 | orchestrator | Tuesday 03 June 2025 15:47:51 +0000 (0:00:00.069) 0:03:16.674 **********
2025-06-03 15:49:26.365629 | orchestrator |
2025-06-03 15:49:26.365636 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-03 15:49:26.365642 | orchestrator | Tuesday 03 June 2025 15:47:52 +0000 (0:00:00.271) 0:03:16.945 **********
2025-06-03 15:49:26.365648 | orchestrator |
2025-06-03 15:49:26.365654 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-03 15:49:26.365661 | orchestrator | Tuesday 03 June 2025 15:47:52 +0000 (0:00:00.069) 0:03:17.015 **********
2025-06-03 15:49:26.365667 | orchestrator |
2025-06-03 15:49:26.365673 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-03 15:49:26.365679 | orchestrator | Tuesday 03 June 2025 15:47:52 +0000 (0:00:00.070) 0:03:17.085 **********
2025-06-03 15:49:26.365686 | orchestrator |
2025-06-03 15:49:26.365692 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-03 15:49:26.365698 | orchestrator | Tuesday 03 June 2025 15:47:52 +0000 (0:00:00.066) 0:03:17.152 **********
2025-06-03 15:49:26.365704 | orchestrator |
2025-06-03 15:49:26.365710 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-06-03 15:49:26.365715 | orchestrator | Tuesday 03 June 2025 15:47:52 +0000 (0:00:00.065) 0:03:17.217 **********
2025-06-03 15:49:26.365727 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:49:26.365733 | orchestrator | changed: [testbed-node-2]
2025-06-03 15:49:26.365739 | orchestrator | changed: [testbed-node-1]
2025-06-03 15:49:26.365746 | orchestrator |
2025-06-03 15:49:26.365752 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-06-03 15:49:26.365758 | orchestrator | Tuesday 03 June 2025 15:48:20 +0000 (0:00:28.113) 0:03:45.331 **********
2025-06-03 15:49:26.365764 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:49:26.365771 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:49:26.365777 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:49:26.365782 | orchestrator |
2025-06-03 15:49:26.365788 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 15:49:26.365795 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-03 15:49:26.365803 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-06-03 15:49:26.365809 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-06-03 15:49:26.365816 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-03 15:49:26.365822 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-03 15:49:26.365832 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-03 15:49:26.365838 | orchestrator |
2025-06-03 15:49:26.365844 | orchestrator |
2025-06-03 15:49:26.365850 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 15:49:26.365857 | orchestrator | Tuesday 03 June 2025 15:49:24 +0000 (0:01:04.139) 0:04:49.470 **********
2025-06-03 15:49:26.365863 | orchestrator | ===============================================================================
2025-06-03 15:49:26.365869 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 64.14s
2025-06-03 15:49:26.365875 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 45.03s
2025-06-03 15:49:26.365881 | orchestrator | neutron : Restart neutron-server container ----------------------------- 28.11s
2025-06-03 15:49:26.365887 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.55s
2025-06-03 15:49:26.365893 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.36s
2025-06-03 15:49:26.365899 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.74s
2025-06-03 15:49:26.365906 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 4.68s
2025-06-03 15:49:26.365912 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.58s
2025-06-03 15:49:26.365919 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.00s
2025-06-03 15:49:26.365925 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.79s
2025-06-03 15:49:26.365931 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.74s
2025-06-03 15:49:26.365937 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.66s
2025-06-03 15:49:26.365943 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.66s
2025-06-03 15:49:26.365950 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.60s
2025-06-03 15:49:26.365956 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.38s
2025-06-03 15:49:26.365962 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.33s
2025-06-03 15:49:26.365973 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 3.33s
2025-06-03 15:49:26.365983 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 3.28s
2025-06-03 15:49:26.365989 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 3.25s
2025-06-03 15:49:26.365995 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.20s
2025-06-03 15:49:26.366001 | orchestrator | 2025-06-03 15:49:26 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:49:26.366007 | orchestrator | 2025-06-03 15:49:26 | INFO  | Task 02e7aabf-85e7-442f-8fe0-20c046ed8188 is in state STARTED
2025-06-03 15:49:26.366053 | orchestrator | 2025-06-03 15:49:26 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:49:29.418219 | orchestrator | 2025-06-03 15:49:29 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED
2025-06-03 15:49:29.419820 | orchestrator | 2025-06-03 15:49:29 | INFO  | Task 88f15d3f-7f29-447d-a26c-c055d2bc5000 is in state STARTED
2025-06-03 15:49:29.421840 | orchestrator | 2025-06-03 15:49:29 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:49:29.425213 | orchestrator | 2025-06-03 15:49:29 | INFO  | Task 02e7aabf-85e7-442f-8fe0-20c046ed8188 is in state STARTED
2025-06-03 15:49:29.425256 | orchestrator | 2025-06-03 15:49:29 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:49:32.459324 | orchestrator | 2025-06-03 15:49:32 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED
2025-06-03 15:49:32.460199 | orchestrator | 2025-06-03 15:49:32 | INFO  | Task 88f15d3f-7f29-447d-a26c-c055d2bc5000 is in state STARTED
2025-06-03 15:49:32.460616 | orchestrator | 2025-06-03 15:49:32 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:49:32.461624 | orchestrator | 2025-06-03 15:49:32 | INFO  | Task 02e7aabf-85e7-442f-8fe0-20c046ed8188 is in state STARTED
2025-06-03 15:49:32.464418 | orchestrator | 2025-06-03 15:49:32 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:49:35.501902 | orchestrator | 2025-06-03 15:49:35 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED
2025-06-03 15:49:35.503428 | orchestrator | 2025-06-03 15:49:35 | INFO  | Task 88f15d3f-7f29-447d-a26c-c055d2bc5000 is in state STARTED
2025-06-03 15:49:35.504203 | orchestrator | 2025-06-03 15:49:35 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:49:35.505986 | orchestrator | 2025-06-03 15:49:35 | INFO  | Task 02e7aabf-85e7-442f-8fe0-20c046ed8188 is in state STARTED
2025-06-03 15:49:35.506091 | orchestrator | 2025-06-03 15:49:35 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:49:38.542416 | orchestrator | 2025-06-03 15:49:38 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED
2025-06-03 15:49:38.543642 | orchestrator | 2025-06-03 15:49:38 | INFO  | Task 88f15d3f-7f29-447d-a26c-c055d2bc5000 is in state STARTED
2025-06-03 15:49:38.544980 | orchestrator | 2025-06-03 15:49:38 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:49:38.545495 | orchestrator | 2025-06-03 15:49:38 | INFO  | Task 02e7aabf-85e7-442f-8fe0-20c046ed8188 is in state STARTED
2025-06-03 15:49:38.545514 | orchestrator | 2025-06-03 15:49:38 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:49:41.578949 | orchestrator | 2025-06-03 15:49:41 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED
2025-06-03 15:49:41.581579 | orchestrator | 2025-06-03 15:49:41 | INFO  | Task 88f15d3f-7f29-447d-a26c-c055d2bc5000 is in state STARTED
2025-06-03 15:49:41.583110 | orchestrator | 2025-06-03 15:49:41 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:49:41.584958 | orchestrator | 2025-06-03 15:49:41 | INFO  | Task 02e7aabf-85e7-442f-8fe0-20c046ed8188 is in state STARTED
2025-06-03 15:49:41.584984 | orchestrator | 2025-06-03 15:49:41 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:49:44.608495 | orchestrator | 2025-06-03 15:49:44 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED
2025-06-03 15:49:44.608565 | orchestrator | 2025-06-03 15:49:44 | INFO  | Task 88f15d3f-7f29-447d-a26c-c055d2bc5000 is in state STARTED
2025-06-03 15:49:44.609247 | orchestrator | 2025-06-03 15:49:44 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:49:44.609901 | orchestrator | 2025-06-03 15:49:44 | INFO  | Task 02e7aabf-85e7-442f-8fe0-20c046ed8188 is in state STARTED
2025-06-03 15:49:44.609923 | orchestrator | 2025-06-03 15:49:44 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:49:47.642539 | orchestrator | 2025-06-03 15:49:47 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED
2025-06-03 15:49:47.644146 | orchestrator | 2025-06-03 15:49:47 | INFO  | Task 88f15d3f-7f29-447d-a26c-c055d2bc5000 is in state STARTED
2025-06-03 15:49:47.645437 | orchestrator | 2025-06-03 15:49:47 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:49:47.647430 | orchestrator | 2025-06-03 15:49:47 | INFO  | Task 02e7aabf-85e7-442f-8fe0-20c046ed8188 is in state STARTED
2025-06-03 15:49:47.648104 | orchestrator | 2025-06-03 15:49:47 | INFO  | Wait 1 second(s) until the next check
2025-06-03 15:49:50.684716 | orchestrator | 2025-06-03 15:49:50 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED
2025-06-03 15:49:50.684882 | orchestrator | 2025-06-03 15:49:50 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED
2025-06-03 15:49:50.684897 | orchestrator | 2025-06-03 15:49:50 | INFO  | Task 88f15d3f-7f29-447d-a26c-c055d2bc5000 is in state STARTED
2025-06-03 15:49:50.685417 | orchestrator | 2025-06-03 15:49:50 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED
2025-06-03 15:49:50.687751 | orchestrator | 2025-06-03 15:49:50 | INFO  | Task 02e7aabf-85e7-442f-8fe0-20c046ed8188 is in state SUCCESS
2025-06-03 15:49:50.688682 | orchestrator |
2025-06-03 15:49:50.688707 | orchestrator |
2025-06-03 15:49:50.688713 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-03 15:49:50.688719 | orchestrator |
2025-06-03 15:49:50.688724 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-03 15:49:50.688730 | orchestrator | Tuesday 03 June 2025 15:47:48 +0000 (0:00:00.265) 0:00:00.265 **********
2025-06-03 15:49:50.688735 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:49:50.688757 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:49:50.688762 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:49:50.688767 | orchestrator |
2025-06-03 15:49:50.688771 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-03 15:49:50.688776 | orchestrator | Tuesday 03 June 2025 15:47:48 +0000 (0:00:00.291) 0:00:00.556 **********
2025-06-03 15:49:50.688781 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-06-03 15:49:50.688787 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-06-03 15:49:50.688791 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-06-03 15:49:50.688796 | orchestrator |
2025-06-03 15:49:50.688801 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-06-03 15:49:50.688805 | orchestrator |
2025-06-03 15:49:50.688810 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-06-03 15:49:50.688814 | orchestrator | Tuesday 03 June 2025 15:47:49 +0000 (0:00:00.493) 0:00:01.049 **********
2025-06-03 15:49:50.688837 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:49:50.688843 | orchestrator |
2025-06-03 15:49:50.688848 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-06-03 15:49:50.688852 | orchestrator | Tuesday 03 June 2025 15:47:50 +0000 (0:00:00.669) 0:00:01.719 **********
2025-06-03 15:49:50.688858 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-06-03 15:49:50.688877 | orchestrator |
2025-06-03 15:49:50.688882 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-06-03 15:49:50.688887 | orchestrator | Tuesday 03 June 2025 15:47:53 +0000 (0:00:03.441) 0:00:05.160 **********
2025-06-03 15:49:50.688913 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-06-03 15:49:50.688918 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-06-03 15:49:50.688923 | orchestrator |
2025-06-03 15:49:50.688928 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-06-03 15:49:50.688932 | orchestrator | Tuesday 03 June 2025 15:47:59 +0000 (0:00:06.256) 0:00:11.417 **********
2025-06-03 15:49:50.688937 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-03 15:49:50.688942 | orchestrator |
2025-06-03 15:49:50.688946 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-06-03 15:49:50.688951 | orchestrator | Tuesday 03 June 2025 15:48:03 +0000 (0:00:03.277) 0:00:14.695 **********
2025-06-03 15:49:50.688956 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-03 15:49:50.688960 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-06-03 15:49:50.688965 | orchestrator |
2025-06-03 15:49:50.688970 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-06-03 15:49:50.688974 | orchestrator | Tuesday 03 June 2025 15:48:06 +0000 (0:00:03.794) 0:00:18.489 **********
2025-06-03 15:49:50.689005 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-03 15:49:50.689026 | orchestrator |
2025-06-03 15:49:50.689030 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-06-03 15:49:50.689035 | orchestrator | Tuesday 03 June 2025 15:48:10 +0000 (0:00:03.194) 0:00:21.683 **********
2025-06-03 15:49:50.689040 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-06-03 15:49:50.689044 | orchestrator |
2025-06-03 15:49:50.689049 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-06-03 15:49:50.689053 | orchestrator | Tuesday 03 June 2025 15:48:14 +0000 (0:00:04.016) 0:00:25.700 **********
2025-06-03 15:49:50.689058 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:49:50.689062 | orchestrator |
2025-06-03 15:49:50.689067 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-06-03 15:49:50.689072 | orchestrator | Tuesday 03 June 2025 15:48:17 +0000 (0:00:03.295) 0:00:28.995 **********
2025-06-03 15:49:50.689076 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:49:50.689081 | orchestrator |
2025-06-03 15:49:50.689096 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-06-03 15:49:50.689101 | orchestrator | Tuesday 03 June 2025 15:48:21 +0000 (0:00:04.102) 0:00:33.098 **********
2025-06-03 15:49:50.689106 | orchestrator | changed: [testbed-node-0]
2025-06-03 15:49:50.689111 | orchestrator |
2025-06-03 15:49:50.689131 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-06-03 15:49:50.689136 | orchestrator | Tuesday 03 June 2025 15:48:25 +0000 (0:00:03.046) 0:00:36.955 **********
2025-06-03 15:49:50.689154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-03 15:49:50.689168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-03 15:49:50.689173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-03 15:49:50.689179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-03 15:49:50.689188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-03 15:49:50.689197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-03 15:49:50.689206 | orchestrator |
2025-06-03 15:49:50.689210 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-06-03 15:49:50.689215 | orchestrator | Tuesday 03 June 2025 15:48:28 +0000 (0:00:03.046) 0:00:40.001 **********
2025-06-03 15:49:50.689219 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:50.689224 | orchestrator |
2025-06-03 15:49:50.689229 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-06-03 15:49:50.689233 | orchestrator | Tuesday 03 June 2025 15:48:28 +0000 (0:00:00.258) 0:00:40.259 **********
2025-06-03 15:49:50.689238 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:49:50.689242 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:49:50.689247 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:49:50.689251 | orchestrator |
2025-06-03 15:49:50.689256 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-06-03 15:49:50.689261 | orchestrator | Tuesday 03 June 2025 15:48:29 +0000 (0:00:01.225) 0:00:41.485 **********
2025-06-03 15:49:50.689265 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-03 15:49:50.689270 | orchestrator |
2025-06-03 15:49:50.689274 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-06-03 15:49:50.689279 | orchestrator | Tuesday 03 June 2025 15:48:32 +0000 (0:00:02.375) 0:00:43.861 **********
2025-06-03 15:49:50.689284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-03 15:49:50.689289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-03 15:49:50.689297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-03 15:49:50.689318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-03 15:49:50.689325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-03 15:49:50.689330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-03 15:49:50.689335 | orchestrator |
2025-06-03 15:49:50.689340 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2025-06-03 15:49:50.689346 | orchestrator | Tuesday 03 June 2025 15:48:35 +0000 (0:00:03.553) 0:00:47.414 **********
2025-06-03 15:49:50.689351 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:49:50.689356 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:49:50.689361 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:49:50.689366 | orchestrator |
2025-06-03 15:49:50.689372 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-06-03 15:49:50.689377 | orchestrator | Tuesday 03 June 2025 15:48:36 +0000 (0:00:00.434) 0:00:47.850 **********
2025-06-03 15:49:50.689382 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:49:50.689388 | orchestrator |
2025-06-03 15:49:50.689393 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2025-06-03 15:49:50.689398 | orchestrator | Tuesday 03 June 2025 15:48:37 +0000 (0:00:01.415) 0:00:49.266 **********
2025-06-03 15:49:50.689407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-03 15:49:50.689419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-03 15:49:50.689425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-03 15:49:50.689430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-03 15:49:50.689436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-03 15:49:50.689448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-03 15:49:50.689453 | orchestrator |
2025-06-03 15:49:50.689459 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2025-06-03 15:49:50.689464 | orchestrator | Tuesday 03 
June 2025 15:48:40 +0000 (0:00:03.082) 0:00:52.348 ********** 2025-06-03 15:49:50.689474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:49:50.689479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.689484 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:49:50.689490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:49:50.689496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.689504 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:49:50.689513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:49:50.689522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.689528 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:49:50.689533 | orchestrator | 2025-06-03 15:49:50.689538 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-06-03 15:49:50.689543 | orchestrator | Tuesday 03 June 2025 15:48:41 +0000 (0:00:00.592) 0:00:52.941 ********** 2025-06-03 15:49:50.689548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:49:50.689554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.689563 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:49:50.689569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:49:50.689577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.689582 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:49:50.689592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:49:50.689598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.689603 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:49:50.689608 | orchestrator | 2025-06-03 15:49:50.689613 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-06-03 15:49:50.689619 | orchestrator | Tuesday 03 June 2025 15:48:42 +0000 (0:00:01.010) 0:00:53.951 ********** 2025-06-03 15:49:50.689624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:49:50.689636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:49:50.689752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:49:50.689758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.689763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.689772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.689777 | orchestrator | 2025-06-03 15:49:50.689781 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-06-03 15:49:50.689786 | orchestrator | Tuesday 03 June 2025 15:48:44 +0000 (0:00:02.292) 0:00:56.243 ********** 2025-06-03 15:49:50.689794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:49:50.689802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:49:50.689807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:49:50.689812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.689820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.689827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.689832 | orchestrator | 2025-06-03 15:49:50.689837 | orchestrator | TASK 
[magnum : Copying over existing policy file] ****************************** 2025-06-03 15:49:50.689841 | orchestrator | Tuesday 03 June 2025 15:48:49 +0000 (0:00:04.538) 0:01:00.781 ********** 2025-06-03 15:49:50.689849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:49:50.689854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.689859 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:49:50.689863 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:49:50.689873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.689878 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:49:50.689886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-03 15:49:50.689893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:49:50.689898 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:49:50.689903 | orchestrator | 2025-06-03 15:49:50.689907 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-06-03 15:49:50.689912 | orchestrator | Tuesday 03 June 2025 15:48:49 +0000 (0:00:00.690) 0:01:01.472 ********** 2025-06-03 15:49:50.689917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:49:50.689925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:49:50.689933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-03 15:49:50.689938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.689945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.689950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:49:50.689958 | orchestrator | 2025-06-03 15:49:50.689962 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-03 15:49:50.689967 | orchestrator | Tuesday 03 June 2025 15:48:52 +0000 (0:00:02.462) 0:01:03.934 ********** 2025-06-03 15:49:50.689972 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:49:50.689976 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:49:50.689981 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:49:50.689985 | orchestrator | 2025-06-03 15:49:50.689990 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-06-03 15:49:50.689995 | orchestrator | Tuesday 03 June 2025 15:48:52 +0000 (0:00:00.271) 0:01:04.205 ********** 2025-06-03 15:49:50.689999 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:49:50.690004 | orchestrator | 2025-06-03 15:49:50.690092 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-06-03 15:49:50.690100 | orchestrator | Tuesday 03 
June 2025 15:48:54 +0000 (0:00:02.171) 0:01:06.376 ********** 2025-06-03 15:49:50.690104 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:49:50.690109 | orchestrator | 2025-06-03 15:49:50.690114 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-06-03 15:49:50.690118 | orchestrator | Tuesday 03 June 2025 15:48:56 +0000 (0:00:02.144) 0:01:08.521 ********** 2025-06-03 15:49:50.690123 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:49:50.690128 | orchestrator | 2025-06-03 15:49:50.690132 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-03 15:49:50.690137 | orchestrator | Tuesday 03 June 2025 15:49:11 +0000 (0:00:14.541) 0:01:23.063 ********** 2025-06-03 15:49:50.690141 | orchestrator | 2025-06-03 15:49:50.690146 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-03 15:49:50.690151 | orchestrator | Tuesday 03 June 2025 15:49:11 +0000 (0:00:00.075) 0:01:23.138 ********** 2025-06-03 15:49:50.690155 | orchestrator | 2025-06-03 15:49:50.690160 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-03 15:49:50.690164 | orchestrator | Tuesday 03 June 2025 15:49:11 +0000 (0:00:00.062) 0:01:23.201 ********** 2025-06-03 15:49:50.690169 | orchestrator | 2025-06-03 15:49:50.690174 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-06-03 15:49:50.690178 | orchestrator | Tuesday 03 June 2025 15:49:11 +0000 (0:00:00.076) 0:01:23.278 ********** 2025-06-03 15:49:50.690183 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:49:50.690187 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:49:50.690192 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:49:50.690196 | orchestrator | 2025-06-03 15:49:50.690201 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] 
****************** 2025-06-03 15:49:50.690205 | orchestrator | Tuesday 03 June 2025 15:49:31 +0000 (0:00:20.321) 0:01:43.600 ********** 2025-06-03 15:49:50.690214 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:49:50.690219 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:49:50.690223 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:49:50.690228 | orchestrator | 2025-06-03 15:49:50.690232 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:49:50.690237 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-03 15:49:50.690242 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:49:50.690251 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-03 15:49:50.690256 | orchestrator | 2025-06-03 15:49:50.690261 | orchestrator | 2025-06-03 15:49:50.690265 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:49:50.690270 | orchestrator | Tuesday 03 June 2025 15:49:47 +0000 (0:00:15.800) 0:01:59.400 ********** 2025-06-03 15:49:50.690274 | orchestrator | =============================================================================== 2025-06-03 15:49:50.690279 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 20.32s 2025-06-03 15:49:50.690287 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.80s 2025-06-03 15:49:50.690292 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.54s 2025-06-03 15:49:50.690296 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.26s 2025-06-03 15:49:50.690301 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.54s 2025-06-03 
15:49:50.690306 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.10s 2025-06-03 15:49:50.690310 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.02s 2025-06-03 15:49:50.690315 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.86s 2025-06-03 15:49:50.690319 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.79s 2025-06-03 15:49:50.690324 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.55s 2025-06-03 15:49:50.690328 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.44s 2025-06-03 15:49:50.690333 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.30s 2025-06-03 15:49:50.690338 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.28s 2025-06-03 15:49:50.690342 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.19s 2025-06-03 15:49:50.690347 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.08s 2025-06-03 15:49:50.690351 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 3.05s 2025-06-03 15:49:50.690356 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.46s 2025-06-03 15:49:50.690360 | orchestrator | magnum : Check if kubeconfig file is supplied --------------------------- 2.38s 2025-06-03 15:49:50.690365 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.29s 2025-06-03 15:49:50.690369 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.17s 2025-06-03 15:49:50.690374 | orchestrator | 2025-06-03 15:49:50 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:49:53.735855 | 
orchestrator | 2025-06-03 15:49:53 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:49:53.739001 | orchestrator | 2025-06-03 15:49:53 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED 2025-06-03 15:49:53.740908 | orchestrator | 2025-06-03 15:49:53 | INFO  | Task 88f15d3f-7f29-447d-a26c-c055d2bc5000 is in state STARTED 2025-06-03 15:49:53.742911 | orchestrator | 2025-06-03 15:49:53 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:49:53.742947 | orchestrator | 2025-06-03 15:49:53 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:50:02.868332 | orchestrator | 2025-06-03
15:50:02 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:50:02.872853 | orchestrator | 2025-06-03 15:50:02 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED 2025-06-03 15:50:02.872927 | orchestrator | 2025-06-03 15:50:02 | INFO  | Task 88f15d3f-7f29-447d-a26c-c055d2bc5000 is in state SUCCESS 2025-06-03 15:50:02.872940 | orchestrator | 2025-06-03 15:50:02 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:50:02.873944 | orchestrator | 2025-06-03 15:50:02 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:50:02.874262 | orchestrator | 2025-06-03 15:50:02 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:50:05.921928 | orchestrator | 2025-06-03 15:50:05 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:50:05.923692 | orchestrator | 2025-06-03 15:50:05 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED 2025-06-03 15:50:05.925214 | orchestrator | 2025-06-03 15:50:05 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:50:05.926453 | orchestrator | 2025-06-03 15:50:05 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:50:05.926693 | orchestrator | 2025-06-03 15:50:05 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:50:08.984605 | orchestrator | 2025-06-03 15:50:08 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:50:08.988829 | orchestrator | 2025-06-03 15:50:08 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED 2025-06-03 15:50:08.991737 | orchestrator | 2025-06-03 15:50:08 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:50:08.993623 | orchestrator | 2025-06-03 15:50:08 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state STARTED 2025-06-03 15:50:08.993661 | orchestrator | 2025-06-03 
15:50:08 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:50:48.604240 | orchestrator | 2025-06-03
15:50:48 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:50:48.604523 | orchestrator | 2025-06-03 15:50:48 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED 2025-06-03 15:50:48.606743 | orchestrator | 2025-06-03 15:50:48 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:50:48.607795 | orchestrator | 2025-06-03 15:50:48.607861 | orchestrator | 2025-06-03 15:50:48.607875 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:50:48.607889 | orchestrator | 2025-06-03 15:50:48.607903 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:50:48.607917 | orchestrator | Tuesday 03 June 2025 15:49:28 +0000 (0:00:00.284) 0:00:00.284 ********** 2025-06-03 15:50:48.607944 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:50:48.607985 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:50:48.607998 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:50:48.608009 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:50:48.608022 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:50:48.608034 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:50:48.608047 | orchestrator | ok: [testbed-manager] 2025-06-03 15:50:48.608059 | orchestrator | 2025-06-03 15:50:48.608070 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:50:48.608083 | orchestrator | Tuesday 03 June 2025 15:49:29 +0000 (0:00:00.900) 0:00:01.184 ********** 2025-06-03 15:50:48.608095 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-06-03 15:50:48.608109 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-06-03 15:50:48.608122 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-06-03 15:50:48.608135 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-06-03 
15:50:48.608147 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-06-03 15:50:48.608160 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-06-03 15:50:48.608174 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-06-03 15:50:48.608186 | orchestrator | 2025-06-03 15:50:48.608197 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-03 15:50:48.608210 | orchestrator | 2025-06-03 15:50:48.608224 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-06-03 15:50:48.608237 | orchestrator | Tuesday 03 June 2025 15:49:30 +0000 (0:00:00.768) 0:00:01.952 ********** 2025-06-03 15:50:48.608253 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-06-03 15:50:48.608268 | orchestrator | 2025-06-03 15:50:48.608280 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-06-03 15:50:48.608293 | orchestrator | Tuesday 03 June 2025 15:49:32 +0000 (0:00:01.471) 0:00:03.423 ********** 2025-06-03 15:50:48.608301 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-06-03 15:50:48.608309 | orchestrator | 2025-06-03 15:50:48.608317 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-06-03 15:50:48.608325 | orchestrator | Tuesday 03 June 2025 15:49:35 +0000 (0:00:03.520) 0:00:06.944 ********** 2025-06-03 15:50:48.608334 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-06-03 15:50:48.608344 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-06-03 15:50:48.608352 | orchestrator | 
2025-06-03 15:50:48.608362 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-06-03 15:50:48.608370 | orchestrator | Tuesday 03 June 2025 15:49:41 +0000 (0:00:06.344) 0:00:13.288 ********** 2025-06-03 15:50:48.608380 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-03 15:50:48.608390 | orchestrator | 2025-06-03 15:50:48.608399 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-06-03 15:50:48.608408 | orchestrator | Tuesday 03 June 2025 15:49:45 +0000 (0:00:03.420) 0:00:16.709 ********** 2025-06-03 15:50:48.608417 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-03 15:50:48.608448 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-06-03 15:50:48.608457 | orchestrator | 2025-06-03 15:50:48.608466 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-06-03 15:50:48.608475 | orchestrator | Tuesday 03 June 2025 15:49:49 +0000 (0:00:04.248) 0:00:20.958 ********** 2025-06-03 15:50:48.608484 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-03 15:50:48.608493 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-06-03 15:50:48.608502 | orchestrator | 2025-06-03 15:50:48.608511 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-06-03 15:50:48.608520 | orchestrator | Tuesday 03 June 2025 15:49:55 +0000 (0:00:06.310) 0:00:27.268 ********** 2025-06-03 15:50:48.608529 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-06-03 15:50:48.608537 | orchestrator | 2025-06-03 15:50:48.608546 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:50:48.608556 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:50:48.608566 | 
orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:50:48.608576 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:50:48.608585 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:50:48.608594 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:50:48.608617 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:50:48.608626 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:50:48.608635 | orchestrator | 2025-06-03 15:50:48.608644 | orchestrator | 2025-06-03 15:50:48.608653 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:50:48.608662 | orchestrator | Tuesday 03 June 2025 15:50:00 +0000 (0:00:04.929) 0:00:32.198 ********** 2025-06-03 15:50:48.608670 | orchestrator | =============================================================================== 2025-06-03 15:50:48.608680 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.34s 2025-06-03 15:50:48.608688 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.31s 2025-06-03 15:50:48.608698 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.93s 2025-06-03 15:50:48.608792 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.25s 2025-06-03 15:50:48.608807 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.52s 2025-06-03 15:50:48.608817 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.42s 2025-06-03 15:50:48.608826 | orchestrator | 
ceph-rgw : include_tasks ------------------------------------------------ 1.47s 2025-06-03 15:50:48.608835 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.90s 2025-06-03 15:50:48.608844 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.77s 2025-06-03 15:50:48.608854 | orchestrator | 2025-06-03 15:50:48.608862 | orchestrator | 2025-06-03 15:50:48.608870 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-06-03 15:50:48.608877 | orchestrator | 2025-06-03 15:50:48.608885 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-06-03 15:50:48.608896 | orchestrator | Tuesday 03 June 2025 15:44:35 +0000 (0:00:00.098) 0:00:00.098 ********** 2025-06-03 15:50:48.608910 | orchestrator | changed: [localhost] 2025-06-03 15:50:48.608929 | orchestrator | 2025-06-03 15:50:48.608979 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-06-03 15:50:48.608993 | orchestrator | Tuesday 03 June 2025 15:44:36 +0000 (0:00:01.436) 0:00:01.534 ********** 2025-06-03 15:50:48.609006 | orchestrator | 2025-06-03 15:50:48.609020 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-03 15:50:48.609209 | orchestrator | changed: [localhost] 2025-06-03 15:50:48.609223 | orchestrator | 2025-06-03 15:50:48.609235 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-06-03 15:50:48.609248 | orchestrator | Tuesday 03 June 2025 15:50:36 +0000 (0:05:59.317) 0:06:00.851 ********** 2025-06-03 15:50:48.609261 | orchestrator | changed: [localhost] 2025-06-03 15:50:48.609273 | orchestrator | 2025-06-03 15:50:48.609286 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:50:48.609299 | orchestrator | 2025-06-03 15:50:48.609313 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:50:48.609327 | orchestrator | Tuesday 03 June 2025 15:50:45 +0000 (0:00:08.945) 0:06:09.797 ********** 2025-06-03 15:50:48.609339 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:50:48.609352 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:50:48.609365 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:50:48.609379 | orchestrator | 2025-06-03 15:50:48.609392 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:50:48.609414 | orchestrator | Tuesday 03 June 2025 15:50:45 +0000 (0:00:00.706) 0:06:10.504 ********** 2025-06-03 15:50:48.609428 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
enable_ironic_True 2025-06-03 15:50:48.609442 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-06-03 15:50:48.609456 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-06-03 15:50:48.609470 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-06-03 15:50:48.609484 | orchestrator | 2025-06-03 15:50:48.609498 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-06-03 15:50:48.609512 | orchestrator | skipping: no hosts matched 2025-06-03 15:50:48.609527 | orchestrator | 2025-06-03 15:50:48.609540 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:50:48.609554 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:50:48.609564 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:50:48.609572 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:50:48.609580 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:50:48.609588 | orchestrator | 2025-06-03 15:50:48.609596 | orchestrator | 2025-06-03 15:50:48.609604 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:50:48.609633 | orchestrator | Tuesday 03 June 2025 15:50:47 +0000 (0:00:01.256) 0:06:11.761 ********** 2025-06-03 15:50:48.609641 | orchestrator | =============================================================================== 2025-06-03 15:50:48.609649 | orchestrator | Download ironic-agent initramfs --------------------------------------- 359.32s 2025-06-03 15:50:48.609657 | orchestrator | Download ironic-agent kernel -------------------------------------------- 8.95s 2025-06-03 15:50:48.609665 | orchestrator | Ensure the destination 
directory exists --------------------------------- 1.44s 2025-06-03 15:50:48.609674 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.26s 2025-06-03 15:50:48.609681 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.71s 2025-06-03 15:50:48.609689 | orchestrator | 2025-06-03 15:50:48 | INFO  | Task 06bf7594-82cc-4f39-a568-16db6170ae64 is in state SUCCESS 2025-06-03 15:50:48.609697 | orchestrator | 2025-06-03 15:50:48 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:50:51.635609 | orchestrator | 2025-06-03 15:50:51 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:50:51.637329 | orchestrator | 2025-06-03 15:50:51 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED 2025-06-03 15:50:51.638136 | orchestrator | 2025-06-03 15:50:51 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:50:51.639734 | orchestrator | 2025-06-03 15:50:51 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:50:51.641028 | orchestrator | 2025-06-03 15:50:51 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:50:54.670461 | orchestrator | 2025-06-03 15:50:54 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:50:54.670814 | orchestrator | 2025-06-03 15:50:54 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED 2025-06-03 15:50:54.671425 | orchestrator | 2025-06-03 15:50:54 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:50:54.672247 | orchestrator | 2025-06-03 15:50:54 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:50:54.672270 | orchestrator | 2025-06-03 15:50:54 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:50:57.706583 | orchestrator | 2025-06-03 15:50:57 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state 
STARTED 2025-06-03 15:50:57.706873 | orchestrator | 2025-06-03 15:50:57 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state STARTED 2025-06-03 15:50:57.708196 | orchestrator | 2025-06-03 15:50:57 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:50:57.712046 | orchestrator | 2025-06-03 15:50:57 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:50:57.712120 | orchestrator | 2025-06-03 15:50:57 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:00.739246 | orchestrator | 2025-06-03 15:51:00 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:00.741210 | orchestrator | 2025-06-03 15:51:00 | INFO  | Task b09c2362-48b4-42a1-9ec5-64485050a7df is in state SUCCESS 2025-06-03 15:51:00.746405 | orchestrator | 2025-06-03 15:51:00.746472 | orchestrator | 2025-06-03 15:51:00.746480 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:51:00.746519 | orchestrator | 2025-06-03 15:51:00.746525 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:51:00.746550 | orchestrator | Tuesday 03 June 2025 15:47:56 +0000 (0:00:00.327) 0:00:00.327 ********** 2025-06-03 15:51:00.746556 | orchestrator | ok: [testbed-manager] 2025-06-03 15:51:00.746562 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:51:00.746621 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:51:00.746626 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:51:00.746631 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:51:00.746636 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:51:00.746641 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:51:00.746646 | orchestrator | 2025-06-03 15:51:00.746651 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:51:00.746656 | orchestrator | Tuesday 03 June 2025 15:47:58 
+0000 (0:00:01.217) 0:00:01.545 ********** 2025-06-03 15:51:00.746661 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-06-03 15:51:00.746666 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-06-03 15:51:00.746671 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-06-03 15:51:00.746676 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-06-03 15:51:00.746681 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-06-03 15:51:00.746685 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-06-03 15:51:00.746690 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-06-03 15:51:00.746695 | orchestrator | 2025-06-03 15:51:00.746700 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-06-03 15:51:00.746705 | orchestrator | 2025-06-03 15:51:00.746710 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-03 15:51:00.746714 | orchestrator | Tuesday 03 June 2025 15:47:58 +0000 (0:00:00.683) 0:00:02.229 ********** 2025-06-03 15:51:00.746720 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:51:00.746726 | orchestrator | 2025-06-03 15:51:00.746730 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-06-03 15:51:00.746736 | orchestrator | Tuesday 03 June 2025 15:48:00 +0000 (0:00:01.333) 0:00:03.563 ********** 2025-06-03 15:51:00.746743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.746751 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-03 15:51:00.746758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.746763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.746794 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.746807 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.746855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.746863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.746954 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.747010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.747020 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.747038 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.747059 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.747068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.747078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.747087 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.747097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.747106 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.747115 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-03 15:51:00.747148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.747164 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 
15:51:00.747177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.747185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.747193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.747201 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.747213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.747221 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.747238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.747247 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.747255 | orchestrator | 2025-06-03 15:51:00.747263 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-03 15:51:00.747272 | orchestrator | Tuesday 03 June 2025 15:48:02 +0000 (0:00:02.746) 0:00:06.309 ********** 2025-06-03 15:51:00.747279 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:51:00.747288 | orchestrator | 2025-06-03 15:51:00.747333 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-06-03 15:51:00.747342 | orchestrator | Tuesday 03 June 2025 15:48:04 +0000 (0:00:01.495) 0:00:07.805 ********** 2025-06-03 15:51:00.747352 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-03 15:51:00.747362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.747376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.747384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.747398 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.747411 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.747420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.747428 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.747436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.747445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.747466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.747474 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': 
{}}}) 2025-06-03 15:51:00.747487 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.747500 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.747508 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.747517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.747525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.747538 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.747547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.747561 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-03 15:51:00.747574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.747583 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.747591 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.747599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.747613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.747621 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.747629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.748117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.748147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.748156 | orchestrator | 2025-06-03 15:51:00.748165 | 
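The loop items above are kolla-ansible service definitions: each host iterates over a dict of services and acts only on entries that are enabled for it (which is why, for example, `prometheus-libvirt-exporter` appears only on compute nodes). A minimal sketch of that filtering pattern, mirroring Ansible's `dict2items` + `selectattr('value.enabled')` idiom — the dict values use the same field names as the log items, but the data and the helper function here are illustrative, not OSISM/kolla-ansible code:

```python
# Hypothetical sketch of kolla-style service selection; field names
# ('container_name', 'enabled', 'image', 'volumes') match the log items.
services = {
    "prometheus-cadvisor": {
        "container_name": "prometheus_cadvisor",
        "enabled": True,
        "image": "registry.osism.tech/kolla/prometheus-cadvisor:2024.2",
        "volumes": ["/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro"],
    },
    "prometheus-mysqld-exporter": {
        "container_name": "prometheus_mysqld_exporter",
        "enabled": False,  # e.g. this host is not in the service's group
        "image": "registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2",
        "volumes": [],
    },
}

def enabled_containers(services: dict) -> list[str]:
    """Return container names for services that should run on this host,
    like Ansible's: dict2items | selectattr('value.enabled')."""
    return [
        svc["container_name"]
        for svc in services.values()
        if svc.get("enabled")
    ]

print(enabled_containers(services))  # ['prometheus_cadvisor']
```

In the real playbook the same per-service dict also carries the bind-mount list and optional `haproxy` frontend definitions, so a single data structure drives config copy, container deploy, and load-balancer wiring.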
orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-06-03 15:51:00.748173 | orchestrator | Tuesday 03 June 2025 15:48:10 +0000 (0:00:05.828) 0:00:13.633 ********** 2025-06-03 15:51:00.748182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:51:00.748191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.748209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.748217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.748225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.748232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:51:00.748252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2025-06-03 15:51:00.748260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.748268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.748276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.748290 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-03 15:51:00.748300 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:51:00.748308 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.748324 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-03 15:51:00.748333 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.748342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:51:00.748355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.748363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.748371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.748380 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:00.748388 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:00.748396 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:51:00.748404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.748470 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:00.748489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:51:00.748519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.748593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.748608 | orchestrator | skipping: 
[testbed-node-3] 2025-06-03 15:51:00.748616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:51:00.748624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.748633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.748640 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:00.748648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:51:00.748657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.748675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.748684 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:00.748692 | orchestrator | 2025-06-03 15:51:00.748700 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-06-03 15:51:00.748708 | orchestrator | Tuesday 03 June 2025 15:48:11 +0000 (0:00:01.592) 0:00:15.226 ********** 2025-06-03 15:51:00.748721 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-03 15:51:00.748729 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:51:00.748737 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.748746 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-03 15:51:00.748755 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.748777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:51:00.748787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.748802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.748811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.748820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.748828 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:51:00.748837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:51:00.748846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.748854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.748872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.748995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.749007 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:00.749016 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:00.749024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:51:00.749032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.749040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.749048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.749056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-03 15:51:00.749064 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:00.749090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:51:00.749110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.749120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.749128 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:00.749160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:51:00.749170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.749179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.749186 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:00.749193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-03 15:51:00.749201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.749336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-03 15:51:00.749351 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:00.749358 | orchestrator | 2025-06-03 15:51:00.749366 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-06-03 15:51:00.749373 | orchestrator | Tuesday 03 June 2025 15:48:13 +0000 (0:00:01.974) 0:00:17.200 ********** 2025-06-03 15:51:00.749382 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-03 15:51:00.749390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.749398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.749406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.749413 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.749420 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.749439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.749450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.749458 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.749465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.749472 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.749480 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.749487 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.749499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.749511 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.749523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.749531 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.749538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.749546 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.749577 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-03 15:51:00.749592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.749609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.749618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.749625 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.749633 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.749641 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.749649 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.749656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.749669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.749676 | orchestrator | 2025-06-03 15:51:00.749683 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-06-03 15:51:00.749690 | orchestrator | Tuesday 03 June 2025 15:48:20 +0000 (0:00:06.463) 0:00:23.664 ********** 2025-06-03 15:51:00.749698 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-03 15:51:00.749705 | orchestrator | 2025-06-03 15:51:00.749713 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-06-03 15:51:00.749724 | orchestrator | Tuesday 03 June 2025 15:48:21 +0000 (0:00:01.581) 0:00:25.245 ********** 2025-06-03 15:51:00.749741 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311873, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3208375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.749749 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311873, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3208375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2025-06-03 15:51:00.749757 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311873, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3208375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.749764 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311873, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3208375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.749771 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1311732, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2718368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.749787 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311873, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3208375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.749798 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311873, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3208375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:51:00.749809 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1311732, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2718368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.749817 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 
'inode': 1311873, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3208375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.749825 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1311732, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2718368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.749832 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1311732, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2718368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.749844 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1311732, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2718368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.749851 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1311706, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.749863 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1311706, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.749876 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1311732, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2718368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.749883 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1311706, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.749891 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1311706, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.749899 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1311706, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.749911 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1311707, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.749918 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1311732, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2718368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.749926 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1311707, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.749960 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1311706, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.749999 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1311707, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750009 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1311707, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750055 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1311707, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750101 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1311730, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2708368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750111 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1311707, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750152 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1311730, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2708368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750672 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1311730, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2708368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750741 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1311730, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2708368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750751 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1311730, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2708368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750777 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1311730, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2708368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750784 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1311712, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750791 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1311706, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750799 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1311712, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750824 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1311712, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750832 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1311712, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750839 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1311712, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750851 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1311712, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750858 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1311715, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2568367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750865 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1311715, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2568367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750872 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1311715, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2568367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750887 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1311733, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2718368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750894 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1311715, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2568367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750901 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1311715, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2568367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750913 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1311733, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2718368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750920 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1311715, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2568367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750927 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1311707, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750934 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1311733, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2718368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750988 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1311733, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2718368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.750998 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1311870, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3198376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751006 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1311870, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3198376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751018 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1311733, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2718368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751025 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1311870, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3198376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751032 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1311733, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2718368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751038 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1311870, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3198376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751059 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1311730, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2708368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751067 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1311870, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3198376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751078 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1311884, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3258376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751085 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1311884, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3258376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751092 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1311884, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3258376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751099 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1311884, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3258376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751105 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1311870, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3198376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751120 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1311737, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.272837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751127 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1311884, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3258376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751138 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1311737, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.272837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751145 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1311737, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.272837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751152 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1311884, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3258376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751158 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1311737, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.272837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751165 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1311712, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751179 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1311737, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.272837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751187 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311710, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2548366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751199 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311710, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2548366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751205 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311710, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2548366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751212 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311710, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2548366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751220 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1311737, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.272837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-03 15:51:00.751228 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1311714, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr':
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751247 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311710, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2548366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751263 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1311714, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751271 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1311714, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 
15:51:00.751278 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311710, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2548366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751287 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1311714, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751294 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311705, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751302 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1311715, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2568367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:51:00.751318 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1311714, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751332 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311705, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751341 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 
1311705, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751349 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311705, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751356 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1311714, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751363 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1311731, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2708368, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751370 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311705, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751384 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1311731, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2708368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751396 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1311731, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2708368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2025-06-03 15:51:00.751403 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1311731, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2708368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751410 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311705, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751416 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1311731, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2708368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751423 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1311882, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3258376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751430 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1311733, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2718368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:51:00.751443 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1311882, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3258376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751458 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1311882, 
'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3258376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751465 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1311882, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3258376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751472 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1311713, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751479 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1311731, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2708368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751496 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1311882, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3258376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751502 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1311713, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751521 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1311713, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2025-06-03 15:51:00.751529 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1311874, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3218377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751536 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:00.751544 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1311713, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751551 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1311874, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3218377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751558 | orchestrator | skipping: [testbed-node-5] 
2025-06-03 15:51:00.751565 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1311882, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3258376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751572 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1311870, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3198376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:51:00.751578 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1311874, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3218377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751596 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1311713, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751605 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:00.751612 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1311874, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3218377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751619 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:00.751625 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1311874, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3218377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751631 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:00.751638 | orchestrator | skipping: [testbed-node-2] => 
(item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1311713, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751645 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1311874, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3218377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-03 15:51:00.751652 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:00.751659 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1311884, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3258376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:51:00.751671 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1311737, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.272837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:51:00.751686 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1311710, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2548366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:51:00.751693 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1311714, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:51:00.751700 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 
1311705, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2538366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:51:00.751707 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1311731, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2708368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:51:00.751714 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1311882, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3258376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:51:00.751720 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1311713, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2558365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:51:00.751734 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1311874, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.3218377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-03 15:51:00.751741 | orchestrator | 2025-06-03 15:51:00.751749 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-06-03 15:51:00.751756 | orchestrator | Tuesday 03 June 2025 15:48:48 +0000 (0:00:26.481) 0:00:51.727 ********** 2025-06-03 15:51:00.751763 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-03 15:51:00.751770 | orchestrator | 2025-06-03 15:51:00.751781 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-06-03 15:51:00.751788 | orchestrator | Tuesday 03 June 2025 15:48:49 +0000 (0:00:00.714) 0:00:52.442 ********** 2025-06-03 15:51:00.751794 | orchestrator | [WARNING]: Skipped 2025-06-03 15:51:00.751807 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:51:00.751814 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-06-03 15:51:00.751820 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:51:00.751827 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-06-03 15:51:00.751835 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-03 
15:51:00.751841 | orchestrator | [WARNING]: Skipped 2025-06-03 15:51:00.751848 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:51:00.751855 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-06-03 15:51:00.751861 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:51:00.751868 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-06-03 15:51:00.751874 | orchestrator | [WARNING]: Skipped 2025-06-03 15:51:00.751881 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:51:00.751888 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-06-03 15:51:00.751894 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:51:00.751901 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-06-03 15:51:00.751907 | orchestrator | [WARNING]: Skipped 2025-06-03 15:51:00.751914 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:51:00.751921 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-06-03 15:51:00.751927 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:51:00.751933 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-06-03 15:51:00.752025 | orchestrator | [WARNING]: Skipped 2025-06-03 15:51:00.752036 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:51:00.752043 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-06-03 15:51:00.752050 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:51:00.752057 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-06-03 15:51:00.752064 | orchestrator | [WARNING]: Skipped 
2025-06-03 15:51:00.752079 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:51:00.752085 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-06-03 15:51:00.752092 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:51:00.752099 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-06-03 15:51:00.752106 | orchestrator | [WARNING]: Skipped 2025-06-03 15:51:00.752113 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:51:00.752119 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-06-03 15:51:00.752125 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-03 15:51:00.752132 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-06-03 15:51:00.752138 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:51:00.752145 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-03 15:51:00.752152 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-03 15:51:00.752159 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-03 15:51:00.752165 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-03 15:51:00.752172 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-03 15:51:00.752178 | orchestrator | 2025-06-03 15:51:00.752184 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-06-03 15:51:00.752191 | orchestrator | Tuesday 03 June 2025 15:48:51 +0000 (0:00:02.116) 0:00:54.558 ********** 2025-06-03 15:51:00.752198 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-03 15:51:00.752206 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-03 15:51:00.752213 | orchestrator | skipping: 
[testbed-node-0] 2025-06-03 15:51:00.752219 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:00.752226 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-03 15:51:00.752232 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:00.752239 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-03 15:51:00.752246 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:00.752252 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-03 15:51:00.752259 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:00.752265 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-03 15:51:00.752271 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:00.752278 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-06-03 15:51:00.752285 | orchestrator | 2025-06-03 15:51:00.752292 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-06-03 15:51:00.752298 | orchestrator | Tuesday 03 June 2025 15:49:04 +0000 (0:00:13.469) 0:01:08.028 ********** 2025-06-03 15:51:00.752313 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-03 15:51:00.752320 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-03 15:51:00.752327 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:00.752338 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:00.752345 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-03 15:51:00.752352 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:00.752358 | 
orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-03 15:51:00.752365 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-03 15:51:00.752372 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:00.752384 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:00.752391 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-03 15:51:00.752398 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:00.752404 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-06-03 15:51:00.752411 | orchestrator | 2025-06-03 15:51:00.752418 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-06-03 15:51:00.752424 | orchestrator | Tuesday 03 June 2025 15:49:07 +0000 (0:00:02.752) 0:01:10.780 ********** 2025-06-03 15:51:00.752431 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-03 15:51:00.752438 | orchestrator | skipping: 2025-06-03 15:51:00 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:00.752445 | orchestrator | 2025-06-03 15:51:00 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:00.752452 | orchestrator | 2025-06-03 15:51:00 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:00.752459 | orchestrator | [testbed-node-0] 2025-06-03 15:51:00.752466 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-03 15:51:00.752472 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:00.752479 | orchestrator | skipping: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-03 15:51:00.752485 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:00.752491 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-03 15:51:00.752498 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:00.752505 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-03 15:51:00.752511 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:00.752518 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-06-03 15:51:00.752524 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-03 15:51:00.752531 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:00.752537 | orchestrator | 2025-06-03 15:51:00.752544 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-06-03 15:51:00.752550 | orchestrator | Tuesday 03 June 2025 15:49:08 +0000 (0:00:01.261) 0:01:12.042 ********** 2025-06-03 15:51:00.752557 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-03 15:51:00.752563 | orchestrator | 2025-06-03 15:51:00.752570 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-06-03 15:51:00.752577 | orchestrator | Tuesday 03 June 2025 15:49:09 +0000 (0:00:00.674) 0:01:12.716 ********** 2025-06-03 15:51:00.752584 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:51:00.752590 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:00.752596 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:00.752603 | orchestrator | 
skipping: [testbed-node-2] 2025-06-03 15:51:00.752610 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:00.752617 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:00.752624 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:00.752630 | orchestrator | 2025-06-03 15:51:00.752636 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-06-03 15:51:00.752709 | orchestrator | Tuesday 03 June 2025 15:49:10 +0000 (0:00:00.749) 0:01:13.466 ********** 2025-06-03 15:51:00.752720 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:51:00.752726 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:00.752740 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:00.752748 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:00.752754 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:51:00.752760 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:51:00.752767 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:51:00.752773 | orchestrator | 2025-06-03 15:51:00.752780 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-06-03 15:51:00.752786 | orchestrator | Tuesday 03 June 2025 15:49:12 +0000 (0:00:01.917) 0:01:15.383 ********** 2025-06-03 15:51:00.752793 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-03 15:51:00.752800 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:51:00.752806 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-03 15:51:00.752819 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:00.752826 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-03 15:51:00.752832 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:00.752843 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-03 15:51:00.752850 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:00.752856 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-03 15:51:00.752863 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:00.752869 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-03 15:51:00.752875 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:00.752882 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-03 15:51:00.752888 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:00.752895 | orchestrator | 2025-06-03 15:51:00.752901 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-06-03 15:51:00.752908 | orchestrator | Tuesday 03 June 2025 15:49:14 +0000 (0:00:02.246) 0:01:17.630 ********** 2025-06-03 15:51:00.752914 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-03 15:51:00.752921 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:00.752927 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-06-03 15:51:00.752934 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-03 15:51:00.752961 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:00.752968 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-03 15:51:00.752975 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:00.752982 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-03 15:51:00.752988 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:00.752994 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-03 15:51:00.753001 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:00.753008 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-03 15:51:00.753014 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:00.753021 | orchestrator | 2025-06-03 15:51:00.753028 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-06-03 15:51:00.753034 | orchestrator | Tuesday 03 June 2025 15:49:16 +0000 (0:00:01.822) 0:01:19.452 ********** 2025-06-03 15:51:00.753040 | orchestrator | [WARNING]: Skipped 2025-06-03 15:51:00.753047 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-06-03 15:51:00.753060 | orchestrator | due to this access issue: 2025-06-03 15:51:00.753066 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-06-03 15:51:00.753072 | orchestrator | not a directory 2025-06-03 15:51:00.753078 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-03 15:51:00.753085 | orchestrator | 2025-06-03 15:51:00.753092 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-06-03 15:51:00.753098 | orchestrator | Tuesday 03 June 2025 15:49:17 +0000 (0:00:01.151) 0:01:20.604 ********** 2025-06-03 15:51:00.753105 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:51:00.753111 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:00.753117 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:00.753123 | orchestrator | skipping: [testbed-node-2] 2025-06-03 
15:51:00.753130 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:00.753136 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:00.753142 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:00.753149 | orchestrator | 2025-06-03 15:51:00.753155 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-06-03 15:51:00.753161 | orchestrator | Tuesday 03 June 2025 15:49:18 +0000 (0:00:00.895) 0:01:21.499 ********** 2025-06-03 15:51:00.753168 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:51:00.753174 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:51:00.753181 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:51:00.753187 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:51:00.753194 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:51:00.753200 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:51:00.753206 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:51:00.753212 | orchestrator | 2025-06-03 15:51:00.753219 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-06-03 15:51:00.753225 | orchestrator | Tuesday 03 June 2025 15:49:18 +0000 (0:00:00.737) 0:01:22.237 ********** 2025-06-03 15:51:00.753240 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-03 15:51:00.753252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.753259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.753266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.753280 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.753287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.753294 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.753301 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-03 15:51:00.753313 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.753323 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.753330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.753342 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.753350 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.753357 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.753364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.753375 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 
'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-03 15:51:00.753387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.753394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.753406 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.753413 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.753420 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.753427 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.753434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.753445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.753455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.753462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-03 15:51:00.753473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.753480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-03 15:51:00.753487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-06-03 15:51:00.753494 | orchestrator | 2025-06-03 15:51:00.753501 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-06-03 15:51:00.753507 | orchestrator | Tuesday 03 June 2025 15:49:22 +0000 (0:00:04.091) 0:01:26.328 ********** 2025-06-03 15:51:00.753514 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-03 15:51:00.753521 | orchestrator | skipping: [testbed-manager] 2025-06-03 15:51:00.753527 | orchestrator | 2025-06-03 15:51:00.753534 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-03 15:51:00.753540 | orchestrator | Tuesday 03 June 2025 15:49:24 +0000 (0:00:01.043) 0:01:27.371 ********** 2025-06-03 15:51:00.753546 | orchestrator | 2025-06-03 15:51:00.753553 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-03 15:51:00.753559 | orchestrator | Tuesday 03 June 2025 15:49:24 +0000 (0:00:00.195) 0:01:27.567 ********** 2025-06-03 15:51:00.753566 | orchestrator | 2025-06-03 15:51:00.753572 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-03 15:51:00.753579 | orchestrator | Tuesday 03 June 2025 15:49:24 +0000 (0:00:00.059) 0:01:27.626 ********** 2025-06-03 15:51:00.753585 | orchestrator | 2025-06-03 15:51:00.753591 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-03 15:51:00.753597 | orchestrator | Tuesday 03 June 2025 15:49:24 +0000 (0:00:00.059) 0:01:27.686 ********** 2025-06-03 15:51:00.753604 | orchestrator | 2025-06-03 15:51:00.753610 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-03 15:51:00.753616 | orchestrator | Tuesday 03 June 2025 15:49:24 +0000 (0:00:00.059) 0:01:27.745 ********** 2025-06-03 15:51:00.753623 | orchestrator | 2025-06-03 15:51:00.753629 | orchestrator | TASK 
[prometheus : Flush handlers] ********************************************* 2025-06-03 15:51:00.753635 | orchestrator | Tuesday 03 June 2025 15:49:24 +0000 (0:00:00.055) 0:01:27.801 ********** 2025-06-03 15:51:00.753641 | orchestrator | 2025-06-03 15:51:00.753648 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-03 15:51:00.753665 | orchestrator | Tuesday 03 June 2025 15:49:24 +0000 (0:00:00.060) 0:01:27.861 ********** 2025-06-03 15:51:00.753672 | orchestrator | 2025-06-03 15:51:00.753678 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-06-03 15:51:00.753685 | orchestrator | Tuesday 03 June 2025 15:49:24 +0000 (0:00:00.079) 0:01:27.941 ********** 2025-06-03 15:51:00.753695 | orchestrator | changed: [testbed-manager] 2025-06-03 15:51:00.753701 | orchestrator | 2025-06-03 15:51:00.753708 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-06-03 15:51:00.753714 | orchestrator | Tuesday 03 June 2025 15:49:42 +0000 (0:00:17.810) 0:01:45.752 ********** 2025-06-03 15:51:00.753721 | orchestrator | changed: [testbed-manager] 2025-06-03 15:51:00.753728 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:51:00.753734 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:51:00.753741 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:51:00.753747 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:51:00.753753 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:51:00.753760 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:51:00.753766 | orchestrator | 2025-06-03 15:51:00.753773 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-06-03 15:51:00.753779 | orchestrator | Tuesday 03 June 2025 15:49:49 +0000 (0:00:07.342) 0:01:53.095 ********** 2025-06-03 15:51:00.753785 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:51:00.753791 | 
orchestrator | changed: [testbed-node-2] 2025-06-03 15:51:00.753798 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:51:00.753804 | orchestrator | 2025-06-03 15:51:00.753810 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-06-03 15:51:00.753817 | orchestrator | Tuesday 03 June 2025 15:50:00 +0000 (0:00:10.486) 0:02:03.581 ********** 2025-06-03 15:51:00.753823 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:51:00.753829 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:51:00.753835 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:51:00.753842 | orchestrator | 2025-06-03 15:51:00.753849 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-06-03 15:51:00.753855 | orchestrator | Tuesday 03 June 2025 15:50:10 +0000 (0:00:10.222) 0:02:13.803 ********** 2025-06-03 15:51:00.753862 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:51:00.753868 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:51:00.753874 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:51:00.753881 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:51:00.753887 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:51:00.753893 | orchestrator | changed: [testbed-manager] 2025-06-03 15:51:00.753900 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:51:00.753906 | orchestrator | 2025-06-03 15:51:00.753912 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-06-03 15:51:00.753919 | orchestrator | Tuesday 03 June 2025 15:50:24 +0000 (0:00:14.153) 0:02:27.957 ********** 2025-06-03 15:51:00.753925 | orchestrator | changed: [testbed-manager] 2025-06-03 15:51:00.753932 | orchestrator | 2025-06-03 15:51:00.753956 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-06-03 15:51:00.753967 | orchestrator | Tuesday 03 June 2025 15:50:32 +0000 
(0:00:08.089) 0:02:36.047 ********** 2025-06-03 15:51:00.753978 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:51:00.753987 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:51:00.753994 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:51:00.754001 | orchestrator | 2025-06-03 15:51:00.754007 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-06-03 15:51:00.754043 | orchestrator | Tuesday 03 June 2025 15:50:39 +0000 (0:00:07.309) 0:02:43.357 ********** 2025-06-03 15:51:00.754050 | orchestrator | changed: [testbed-manager] 2025-06-03 15:51:00.754058 | orchestrator | 2025-06-03 15:51:00.754064 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-06-03 15:51:00.754071 | orchestrator | Tuesday 03 June 2025 15:50:46 +0000 (0:00:06.784) 0:02:50.141 ********** 2025-06-03 15:51:00.754083 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:51:00.754089 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:51:00.754096 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:51:00.754102 | orchestrator | 2025-06-03 15:51:00.754108 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:51:00.754115 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-03 15:51:00.754121 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-03 15:51:00.754128 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-03 15:51:00.754135 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-03 15:51:00.754141 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-03 15:51:00.754148 | orchestrator | 
testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-03 15:51:00.754154 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-03 15:51:00.754160 | orchestrator | 2025-06-03 15:51:00.754166 | orchestrator | 2025-06-03 15:51:00.754173 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:51:00.754179 | orchestrator | Tuesday 03 June 2025 15:50:59 +0000 (0:00:13.156) 0:03:03.298 ********** 2025-06-03 15:51:00.754185 | orchestrator | =============================================================================== 2025-06-03 15:51:00.754197 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 26.48s 2025-06-03 15:51:00.754204 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.81s 2025-06-03 15:51:00.754210 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.15s 2025-06-03 15:51:00.754222 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.47s 2025-06-03 15:51:00.754229 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 13.16s 2025-06-03 15:51:00.754235 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.49s 2025-06-03 15:51:00.754241 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.22s 2025-06-03 15:51:00.754248 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.09s 2025-06-03 15:51:00.754254 | orchestrator | prometheus : Restart prometheus-node-exporter container ----------------- 7.34s 2025-06-03 15:51:00.754261 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 7.31s 2025-06-03 15:51:00.754267 | orchestrator | prometheus : Restart 
prometheus-blackbox-exporter container ------------- 6.78s 2025-06-03 15:51:00.754273 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.46s 2025-06-03 15:51:00.754280 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.83s 2025-06-03 15:51:00.754286 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.09s 2025-06-03 15:51:00.754292 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.75s 2025-06-03 15:51:00.754299 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.75s 2025-06-03 15:51:00.754305 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.25s 2025-06-03 15:51:00.754311 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.12s 2025-06-03 15:51:00.754322 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 1.97s 2025-06-03 15:51:00.754329 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.92s 2025-06-03 15:51:03.772828 | orchestrator | 2025-06-03 15:51:03 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:03.774388 | orchestrator | 2025-06-03 15:51:03 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:03.774448 | orchestrator | 2025-06-03 15:51:03 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:03.775006 | orchestrator | 2025-06-03 15:51:03 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:03.775210 | orchestrator | 2025-06-03 15:51:03 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:06.808686 | orchestrator | 2025-06-03 15:51:06 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:06.809051 | 
orchestrator | 2025-06-03 15:51:06 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:06.810135 | orchestrator | 2025-06-03 15:51:06 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:06.810634 | orchestrator | 2025-06-03 15:51:06 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:06.810657 | orchestrator | 2025-06-03 15:51:06 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:09.836750 | orchestrator | 2025-06-03 15:51:09 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:09.837502 | orchestrator | 2025-06-03 15:51:09 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:09.838403 | orchestrator | 2025-06-03 15:51:09 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:09.839142 | orchestrator | 2025-06-03 15:51:09 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:09.840235 | orchestrator | 2025-06-03 15:51:09 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:12.870167 | orchestrator | 2025-06-03 15:51:12 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:12.870469 | orchestrator | 2025-06-03 15:51:12 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:12.871149 | orchestrator | 2025-06-03 15:51:12 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:12.872062 | orchestrator | 2025-06-03 15:51:12 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:12.872100 | orchestrator | 2025-06-03 15:51:12 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:15.893961 | orchestrator | 2025-06-03 15:51:15 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:15.894226 | orchestrator | 2025-06-03 
15:51:15 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:15.894643 | orchestrator | 2025-06-03 15:51:15 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:15.895399 | orchestrator | 2025-06-03 15:51:15 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:15.895485 | orchestrator | 2025-06-03 15:51:15 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:18.919914 | orchestrator | 2025-06-03 15:51:18 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:18.920880 | orchestrator | 2025-06-03 15:51:18 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:18.921368 | orchestrator | 2025-06-03 15:51:18 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:18.925101 | orchestrator | 2025-06-03 15:51:18 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:18.925178 | orchestrator | 2025-06-03 15:51:18 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:21.953422 | orchestrator | 2025-06-03 15:51:21 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:21.955052 | orchestrator | 2025-06-03 15:51:21 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:21.955760 | orchestrator | 2025-06-03 15:51:21 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:21.956319 | orchestrator | 2025-06-03 15:51:21 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:21.956496 | orchestrator | 2025-06-03 15:51:21 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:24.991508 | orchestrator | 2025-06-03 15:51:24 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:24.992200 | orchestrator | 2025-06-03 15:51:24 | INFO  | Task 
9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:24.993137 | orchestrator | 2025-06-03 15:51:24 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:24.994232 | orchestrator | 2025-06-03 15:51:24 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:24.994287 | orchestrator | 2025-06-03 15:51:24 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:28.022107 | orchestrator | 2025-06-03 15:51:28 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:28.024972 | orchestrator | 2025-06-03 15:51:28 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:28.028164 | orchestrator | 2025-06-03 15:51:28 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:28.029067 | orchestrator | 2025-06-03 15:51:28 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:28.029167 | orchestrator | 2025-06-03 15:51:28 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:31.073724 | orchestrator | 2025-06-03 15:51:31 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:31.074559 | orchestrator | 2025-06-03 15:51:31 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:31.075982 | orchestrator | 2025-06-03 15:51:31 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:31.077371 | orchestrator | 2025-06-03 15:51:31 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:31.077439 | orchestrator | 2025-06-03 15:51:31 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:34.120789 | orchestrator | 2025-06-03 15:51:34 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:34.122274 | orchestrator | 2025-06-03 15:51:34 | INFO  | Task 
9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:34.123873 | orchestrator | 2025-06-03 15:51:34 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:34.125623 | orchestrator | 2025-06-03 15:51:34 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:34.125690 | orchestrator | 2025-06-03 15:51:34 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:37.160129 | orchestrator | 2025-06-03 15:51:37 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:37.162716 | orchestrator | 2025-06-03 15:51:37 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:37.163991 | orchestrator | 2025-06-03 15:51:37 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:37.165412 | orchestrator | 2025-06-03 15:51:37 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:37.165729 | orchestrator | 2025-06-03 15:51:37 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:40.220143 | orchestrator | 2025-06-03 15:51:40 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:40.222330 | orchestrator | 2025-06-03 15:51:40 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:40.223814 | orchestrator | 2025-06-03 15:51:40 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:40.225368 | orchestrator | 2025-06-03 15:51:40 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:40.225415 | orchestrator | 2025-06-03 15:51:40 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:43.269869 | orchestrator | 2025-06-03 15:51:43 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:43.272241 | orchestrator | 2025-06-03 15:51:43 | INFO  | Task 
9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:43.275154 | orchestrator | 2025-06-03 15:51:43 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:43.276907 | orchestrator | 2025-06-03 15:51:43 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:43.276956 | orchestrator | 2025-06-03 15:51:43 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:46.330807 | orchestrator | 2025-06-03 15:51:46 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:46.332592 | orchestrator | 2025-06-03 15:51:46 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:46.334616 | orchestrator | 2025-06-03 15:51:46 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:46.336171 | orchestrator | 2025-06-03 15:51:46 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:46.336199 | orchestrator | 2025-06-03 15:51:46 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:49.375222 | orchestrator | 2025-06-03 15:51:49 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:49.375936 | orchestrator | 2025-06-03 15:51:49 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:49.376867 | orchestrator | 2025-06-03 15:51:49 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:49.377572 | orchestrator | 2025-06-03 15:51:49 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:49.377699 | orchestrator | 2025-06-03 15:51:49 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:52.423723 | orchestrator | 2025-06-03 15:51:52 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:52.425265 | orchestrator | 2025-06-03 15:51:52 | INFO  | Task 
9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:52.428503 | orchestrator | 2025-06-03 15:51:52 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:52.430690 | orchestrator | 2025-06-03 15:51:52 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:52.430855 | orchestrator | 2025-06-03 15:51:52 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:55.476809 | orchestrator | 2025-06-03 15:51:55 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:55.480960 | orchestrator | 2025-06-03 15:51:55 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:55.485273 | orchestrator | 2025-06-03 15:51:55 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:55.487024 | orchestrator | 2025-06-03 15:51:55 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:55.487265 | orchestrator | 2025-06-03 15:51:55 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:51:58.543634 | orchestrator | 2025-06-03 15:51:58 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:51:58.545712 | orchestrator | 2025-06-03 15:51:58 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:51:58.548327 | orchestrator | 2025-06-03 15:51:58 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:51:58.550601 | orchestrator | 2025-06-03 15:51:58 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:51:58.550676 | orchestrator | 2025-06-03 15:51:58 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:01.587809 | orchestrator | 2025-06-03 15:52:01 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:52:01.588153 | orchestrator | 2025-06-03 15:52:01 | INFO  | Task 
9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:52:01.589740 | orchestrator | 2025-06-03 15:52:01 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:52:01.592209 | orchestrator | 2025-06-03 15:52:01 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:52:01.592277 | orchestrator | 2025-06-03 15:52:01 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:04.637053 | orchestrator | 2025-06-03 15:52:04 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:52:04.637648 | orchestrator | 2025-06-03 15:52:04 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:52:04.638900 | orchestrator | 2025-06-03 15:52:04 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:52:04.640088 | orchestrator | 2025-06-03 15:52:04 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:52:04.640113 | orchestrator | 2025-06-03 15:52:04 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:07.681496 | orchestrator | 2025-06-03 15:52:07 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:52:07.683262 | orchestrator | 2025-06-03 15:52:07 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:52:07.684494 | orchestrator | 2025-06-03 15:52:07 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:52:07.686449 | orchestrator | 2025-06-03 15:52:07 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:52:07.686493 | orchestrator | 2025-06-03 15:52:07 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:10.735085 | orchestrator | 2025-06-03 15:52:10 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:52:10.735188 | orchestrator | 2025-06-03 15:52:10 | INFO  | Task 
9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:52:10.735882 | orchestrator | 2025-06-03 15:52:10 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:52:10.737371 | orchestrator | 2025-06-03 15:52:10 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:52:10.737445 | orchestrator | 2025-06-03 15:52:10 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:13.790228 | orchestrator | 2025-06-03 15:52:13 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:52:13.791040 | orchestrator | 2025-06-03 15:52:13 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:52:13.795513 | orchestrator | 2025-06-03 15:52:13 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:52:13.795595 | orchestrator | 2025-06-03 15:52:13 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:52:13.795719 | orchestrator | 2025-06-03 15:52:13 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:16.850579 | orchestrator | 2025-06-03 15:52:16 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:52:16.852224 | orchestrator | 2025-06-03 15:52:16 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:52:16.854275 | orchestrator | 2025-06-03 15:52:16 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:52:16.855829 | orchestrator | 2025-06-03 15:52:16 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:52:16.856002 | orchestrator | 2025-06-03 15:52:16 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:19.898189 | orchestrator | 2025-06-03 15:52:19 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:52:19.902156 | orchestrator | 2025-06-03 15:52:19 | INFO  | Task 
9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:52:19.902319 | orchestrator | 2025-06-03 15:52:19 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:52:19.903998 | orchestrator | 2025-06-03 15:52:19 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:52:19.904041 | orchestrator | 2025-06-03 15:52:19 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:22.940107 | orchestrator | 2025-06-03 15:52:22 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:52:22.945837 | orchestrator | 2025-06-03 15:52:22 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:52:22.947360 | orchestrator | 2025-06-03 15:52:22 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:52:22.949675 | orchestrator | 2025-06-03 15:52:22 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:52:22.949734 | orchestrator | 2025-06-03 15:52:22 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:25.995565 | orchestrator | 2025-06-03 15:52:25 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:52:25.996446 | orchestrator | 2025-06-03 15:52:25 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:52:25.997608 | orchestrator | 2025-06-03 15:52:25 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:52:25.999002 | orchestrator | 2025-06-03 15:52:25 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:52:25.999090 | orchestrator | 2025-06-03 15:52:25 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:29.029351 | orchestrator | 2025-06-03 15:52:29 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:52:29.031186 | orchestrator | 2025-06-03 15:52:29 | INFO  | Task 
9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:52:29.033061 | orchestrator | 2025-06-03 15:52:29 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:52:29.035283 | orchestrator | 2025-06-03 15:52:29 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:52:29.035353 | orchestrator | 2025-06-03 15:52:29 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:32.073574 | orchestrator | 2025-06-03 15:52:32 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state STARTED 2025-06-03 15:52:32.075092 | orchestrator | 2025-06-03 15:52:32 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:52:32.078173 | orchestrator | 2025-06-03 15:52:32 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:52:32.080504 | orchestrator | 2025-06-03 15:52:32 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:52:32.080566 | orchestrator | 2025-06-03 15:52:32 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:35.112519 | orchestrator | 2025-06-03 15:52:35 | INFO  | Task d95376f6-aabb-48d0-8cff-676bc02c4743 is in state SUCCESS 2025-06-03 15:52:35.113964 | orchestrator | 2025-06-03 15:52:35.114067 | orchestrator | 2025-06-03 15:52:35.114079 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:52:35.114087 | orchestrator | 2025-06-03 15:52:35.114094 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:52:35.114101 | orchestrator | Tuesday 03 June 2025 15:49:52 +0000 (0:00:00.237) 0:00:00.237 ********** 2025-06-03 15:52:35.114107 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:52:35.114116 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:52:35.114123 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:52:35.114130 | orchestrator | 2025-06-03 15:52:35.114137 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:52:35.114144 | orchestrator | Tuesday 03 June 2025 15:49:52 +0000 (0:00:00.303) 0:00:00.541 ********** 2025-06-03 15:52:35.114150 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-06-03 15:52:35.114157 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-06-03 15:52:35.114164 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-06-03 15:52:35.114170 | orchestrator | 2025-06-03 15:52:35.114177 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-06-03 15:52:35.114183 | orchestrator | 2025-06-03 15:52:35.114189 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-03 15:52:35.114196 | orchestrator | Tuesday 03 June 2025 15:49:52 +0000 (0:00:00.419) 0:00:00.960 ********** 2025-06-03 15:52:35.114202 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:52:35.114210 | orchestrator | 2025-06-03 15:52:35.114218 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-06-03 15:52:35.114224 | orchestrator | Tuesday 03 June 2025 15:49:53 +0000 (0:00:00.490) 0:00:01.451 ********** 2025-06-03 15:52:35.114230 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-06-03 15:52:35.114237 | orchestrator | 2025-06-03 15:52:35.114243 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-06-03 15:52:35.114249 | orchestrator | Tuesday 03 June 2025 15:49:56 +0000 (0:00:03.356) 0:00:04.808 ********** 2025-06-03 15:52:35.114256 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-06-03 15:52:35.114264 | orchestrator | changed: [testbed-node-0] => (item=glance -> 
https://api.testbed.osism.xyz:9292 -> public) 2025-06-03 15:52:35.114270 | orchestrator | 2025-06-03 15:52:35.114276 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-06-03 15:52:35.114389 | orchestrator | Tuesday 03 June 2025 15:50:03 +0000 (0:00:06.595) 0:00:11.403 ********** 2025-06-03 15:52:35.114398 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-03 15:52:35.114405 | orchestrator | 2025-06-03 15:52:35.114412 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-06-03 15:52:35.114418 | orchestrator | Tuesday 03 June 2025 15:50:06 +0000 (0:00:03.305) 0:00:14.708 ********** 2025-06-03 15:52:35.114424 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-03 15:52:35.114431 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-06-03 15:52:35.114438 | orchestrator | 2025-06-03 15:52:35.114444 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-06-03 15:52:35.114451 | orchestrator | Tuesday 03 June 2025 15:50:10 +0000 (0:00:03.846) 0:00:18.555 ********** 2025-06-03 15:52:35.114457 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-03 15:52:35.114463 | orchestrator | 2025-06-03 15:52:35.114469 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-06-03 15:52:35.114476 | orchestrator | Tuesday 03 June 2025 15:50:13 +0000 (0:00:03.300) 0:00:21.855 ********** 2025-06-03 15:52:35.114482 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-06-03 15:52:35.114489 | orchestrator | 2025-06-03 15:52:35.114494 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-06-03 15:52:35.114500 | orchestrator | Tuesday 03 June 2025 15:50:17 +0000 (0:00:04.269) 0:00:26.125 ********** 2025-06-03 15:52:35.114529 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:52:35.114543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': 
'', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:52:35.114558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:52:35.114565 | orchestrator | 2025-06-03 15:52:35.114571 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-03 15:52:35.114577 | orchestrator | Tuesday 03 June 2025 15:50:21 +0000 (0:00:03.554) 0:00:29.679 ********** 2025-06-03 15:52:35.114584 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:52:35.114591 | orchestrator | 2025-06-03 15:52:35.114601 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-06-03 15:52:35.114608 | orchestrator | Tuesday 03 June 2025 15:50:22 +0000 (0:00:00.650) 0:00:30.330 ********** 2025-06-03 15:52:35.114614 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:35.114621 | 
orchestrator | changed: [testbed-node-2] 2025-06-03 15:52:35.114627 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:52:35.114634 | orchestrator | 2025-06-03 15:52:35.114640 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-06-03 15:52:35.114647 | orchestrator | Tuesday 03 June 2025 15:50:25 +0000 (0:00:03.496) 0:00:33.827 ********** 2025-06-03 15:52:35.114653 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-03 15:52:35.114660 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-03 15:52:35.114672 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-03 15:52:35.114679 | orchestrator | 2025-06-03 15:52:35.114686 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-06-03 15:52:35.114693 | orchestrator | Tuesday 03 June 2025 15:50:27 +0000 (0:00:01.586) 0:00:35.413 ********** 2025-06-03 15:52:35.114700 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-03 15:52:35.114707 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-03 15:52:35.114714 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-03 15:52:35.114720 | orchestrator | 2025-06-03 15:52:35.114727 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-06-03 15:52:35.114733 | orchestrator | Tuesday 03 June 2025 15:50:28 +0000 (0:00:01.120) 0:00:36.534 ********** 2025-06-03 15:52:35.114740 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:52:35.114812 | orchestrator | ok: [testbed-node-1] 2025-06-03 
15:52:35.114822 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:52:35.114829 | orchestrator | 2025-06-03 15:52:35.114884 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-06-03 15:52:35.114895 | orchestrator | Tuesday 03 June 2025 15:50:29 +0000 (0:00:00.848) 0:00:37.382 ********** 2025-06-03 15:52:35.114902 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:35.114909 | orchestrator | 2025-06-03 15:52:35.114920 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-06-03 15:52:35.114927 | orchestrator | Tuesday 03 June 2025 15:50:29 +0000 (0:00:00.154) 0:00:37.536 ********** 2025-06-03 15:52:35.114934 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:35.114941 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:35.114948 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:35.114954 | orchestrator | 2025-06-03 15:52:35.114961 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-03 15:52:35.114967 | orchestrator | Tuesday 03 June 2025 15:50:29 +0000 (0:00:00.372) 0:00:37.909 ********** 2025-06-03 15:52:35.114974 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:52:35.114980 | orchestrator | 2025-06-03 15:52:35.114987 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-06-03 15:52:35.114994 | orchestrator | Tuesday 03 June 2025 15:50:30 +0000 (0:00:00.528) 0:00:38.437 ********** 2025-06-03 15:52:35.115008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:52:35.115028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:52:35.115036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:52:35.115043 | orchestrator | 2025-06-03 15:52:35.115049 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-06-03 15:52:35.115056 | orchestrator | Tuesday 03 June 2025 15:50:36 +0000 (0:00:06.540) 0:00:44.977 ********** 2025-06-03 15:52:35.115069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-03 15:52:35.115082 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:35.115093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-03 15:52:35.115100 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:35.115113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-03 15:52:35.115129 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:35.115136 | orchestrator | 2025-06-03 15:52:35.115143 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-06-03 15:52:35.115150 | orchestrator | Tuesday 03 June 2025 15:50:39 +0000 (0:00:02.709) 0:00:47.687 ********** 2025-06-03 15:52:35.115160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-03 15:52:35.115168 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:35.115180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-03 15:52:35.115193 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:35.115203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-03 15:52:35.115211 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:35.115218 | orchestrator | 2025-06-03 15:52:35.115224 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-06-03 15:52:35.115230 | orchestrator | Tuesday 03 June 2025 15:50:42 +0000 (0:00:03.448) 0:00:51.135 ********** 2025-06-03 15:52:35.115237 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:35.115244 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:35.115251 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:35.115258 | orchestrator | 2025-06-03 15:52:35.115264 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-06-03 15:52:35.115271 | orchestrator | Tuesday 03 June 2025 15:50:48 +0000 (0:00:05.856) 0:00:56.992 ********** 2025-06-03 15:52:35.115282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:52:35.115302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:52:35.115310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:52:35.115336 | orchestrator | 2025-06-03 15:52:35.115343 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-06-03 15:52:35.115350 | orchestrator | Tuesday 03 June 2025 15:50:53 +0000 (0:00:04.538) 0:01:01.530 ********** 2025-06-03 15:52:35.115357 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:52:35.115364 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:35.115371 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:52:35.115378 | orchestrator | 2025-06-03 15:52:35.115385 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-06-03 15:52:35.115391 | orchestrator | Tuesday 03 June 2025 15:51:00 +0000 (0:00:07.290) 0:01:08.821 ********** 2025-06-03 15:52:35.115398 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:35.115405 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:35.115412 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:35.115419 | orchestrator | 2025-06-03 15:52:35.115426 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-06-03 15:52:35.115436 | orchestrator | Tuesday 03 June 2025 15:51:05 +0000 (0:00:04.437) 0:01:13.258 ********** 2025-06-03 15:52:35.115443 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:35.115449 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:35.115455 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:35.115462 | orchestrator | 
2025-06-03 15:52:35.115468 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-06-03 15:52:35.115475 | orchestrator | Tuesday 03 June 2025 15:51:09 +0000 (0:00:04.949) 0:01:18.208 ********** 2025-06-03 15:52:35.115481 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:35.115488 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:35.115494 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:35.115500 | orchestrator | 2025-06-03 15:52:35.115506 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-06-03 15:52:35.115512 | orchestrator | Tuesday 03 June 2025 15:51:15 +0000 (0:00:05.626) 0:01:23.835 ********** 2025-06-03 15:52:35.115518 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:35.115524 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:35.115531 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:35.115538 | orchestrator | 2025-06-03 15:52:35.115545 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-06-03 15:52:35.115552 | orchestrator | Tuesday 03 June 2025 15:51:19 +0000 (0:00:03.914) 0:01:27.749 ********** 2025-06-03 15:52:35.115559 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:35.115566 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:35.115572 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:35.115579 | orchestrator | 2025-06-03 15:52:35.115586 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-06-03 15:52:35.115592 | orchestrator | Tuesday 03 June 2025 15:51:19 +0000 (0:00:00.286) 0:01:28.036 ********** 2025-06-03 15:52:35.115599 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-03 15:52:35.115606 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:35.115613 | orchestrator | 
skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-03 15:52:35.115619 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:35.115626 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-03 15:52:35.115633 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:35.115645 | orchestrator | 2025-06-03 15:52:35.115652 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-06-03 15:52:35.115662 | orchestrator | Tuesday 03 June 2025 15:51:23 +0000 (0:00:03.347) 0:01:31.383 ********** 2025-06-03 15:52:35.115670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:52:35.115684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:52:35.115696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-03 15:52:35.115709 | orchestrator | 2025-06-03 15:52:35.115716 | orchestrator | TASK [glance : 
include_tasks] ************************************************** 2025-06-03 15:52:35.115723 | orchestrator | Tuesday 03 June 2025 15:51:26 +0000 (0:00:03.650) 0:01:35.033 ********** 2025-06-03 15:52:35.115730 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:52:35.115737 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:52:35.115744 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:52:35.115751 | orchestrator | 2025-06-03 15:52:35.115757 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-06-03 15:52:35.115763 | orchestrator | Tuesday 03 June 2025 15:51:27 +0000 (0:00:00.284) 0:01:35.318 ********** 2025-06-03 15:52:35.115769 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:35.115775 | orchestrator | 2025-06-03 15:52:35.115781 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-06-03 15:52:35.115788 | orchestrator | Tuesday 03 June 2025 15:51:29 +0000 (0:00:02.178) 0:01:37.496 ********** 2025-06-03 15:52:35.115793 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:35.115799 | orchestrator | 2025-06-03 15:52:35.115806 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-06-03 15:52:35.115812 | orchestrator | Tuesday 03 June 2025 15:51:31 +0000 (0:00:02.070) 0:01:39.566 ********** 2025-06-03 15:52:35.115819 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:35.115825 | orchestrator | 2025-06-03 15:52:35.115832 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-06-03 15:52:35.115860 | orchestrator | Tuesday 03 June 2025 15:51:33 +0000 (0:00:02.102) 0:01:41.669 ********** 2025-06-03 15:52:35.115866 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:35.115872 | orchestrator | 2025-06-03 15:52:35.115877 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-06-03 
15:52:35.115884 | orchestrator | Tuesday 03 June 2025 15:52:04 +0000 (0:00:30.697) 0:02:12.366 ********** 2025-06-03 15:52:35.115890 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:35.115896 | orchestrator | 2025-06-03 15:52:35.115908 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-03 15:52:35.115916 | orchestrator | Tuesday 03 June 2025 15:52:06 +0000 (0:00:02.701) 0:02:15.068 ********** 2025-06-03 15:52:35.115935 | orchestrator | 2025-06-03 15:52:35.115942 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-03 15:52:35.115948 | orchestrator | Tuesday 03 June 2025 15:52:06 +0000 (0:00:00.079) 0:02:15.148 ********** 2025-06-03 15:52:35.115954 | orchestrator | 2025-06-03 15:52:35.115960 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-03 15:52:35.115967 | orchestrator | Tuesday 03 June 2025 15:52:06 +0000 (0:00:00.064) 0:02:15.213 ********** 2025-06-03 15:52:35.115979 | orchestrator | 2025-06-03 15:52:35.115985 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-06-03 15:52:35.115991 | orchestrator | Tuesday 03 June 2025 15:52:07 +0000 (0:00:00.082) 0:02:15.295 ********** 2025-06-03 15:52:35.115997 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:52:35.116004 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:52:35.116010 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:52:35.116017 | orchestrator | 2025-06-03 15:52:35.116024 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:52:35.116031 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-03 15:52:35.116040 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-03 15:52:35.116046 | 
orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-03 15:52:35.116053 | orchestrator | 2025-06-03 15:52:35.116060 | orchestrator | 2025-06-03 15:52:35.116067 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:52:35.116073 | orchestrator | Tuesday 03 June 2025 15:52:33 +0000 (0:00:26.607) 0:02:41.903 ********** 2025-06-03 15:52:35.116080 | orchestrator | =============================================================================== 2025-06-03 15:52:35.116086 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.70s 2025-06-03 15:52:35.116093 | orchestrator | glance : Restart glance-api container ---------------------------------- 26.61s 2025-06-03 15:52:35.116104 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.29s 2025-06-03 15:52:35.116111 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.60s 2025-06-03 15:52:35.116118 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.54s 2025-06-03 15:52:35.116124 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.86s 2025-06-03 15:52:35.116131 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.63s 2025-06-03 15:52:35.116138 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.95s 2025-06-03 15:52:35.116144 | orchestrator | glance : Copying over config.json files for services -------------------- 4.54s 2025-06-03 15:52:35.116150 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.44s 2025-06-03 15:52:35.116156 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.27s 2025-06-03 15:52:35.116162 | orchestrator | glance : Copying over 
property-protections-rules.conf ------------------- 3.91s 2025-06-03 15:52:35.116167 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.85s 2025-06-03 15:52:35.116173 | orchestrator | glance : Check glance containers ---------------------------------------- 3.65s 2025-06-03 15:52:35.116179 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.55s 2025-06-03 15:52:35.116184 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.50s 2025-06-03 15:52:35.116190 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.45s 2025-06-03 15:52:35.116196 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.36s 2025-06-03 15:52:35.116202 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.35s 2025-06-03 15:52:35.116209 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.31s 2025-06-03 15:52:35.116361 | orchestrator | 2025-06-03 15:52:35 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:52:35.116374 | orchestrator | 2025-06-03 15:52:35 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:52:35.116389 | orchestrator | 2025-06-03 15:52:35 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:52:35.116396 | orchestrator | 2025-06-03 15:52:35 | INFO  | Task 253602b5-2607-4111-b76e-2b088185d6ae is in state STARTED 2025-06-03 15:52:35.116403 | orchestrator | 2025-06-03 15:52:35 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:52:38.146458 | orchestrator | 2025-06-03 15:52:38 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:52:38.148069 | orchestrator | 2025-06-03 15:52:38 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 
15:52:38.148395 | orchestrator | 2025-06-03 15:52:38 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state STARTED 2025-06-03 15:52:38.149463 | orchestrator | 2025-06-03 15:52:38 | INFO  | Task 253602b5-2607-4111-b76e-2b088185d6ae is in state STARTED 2025-06-03 15:52:38.149496 | orchestrator | 2025-06-03 15:52:38 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:26.907757 | orchestrator | 2025-06-03 15:53:26 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:53:26.908340 | orchestrator | 2025-06-03 15:53:26 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:53:26.910508 | orchestrator | 2025-06-03 15:53:26 | INFO  | Task 4e136a26-1ab3-44f8-baba-4f2e5430b93c is in state SUCCESS 2025-06-03 15:53:26.912982 | orchestrator | 2025-06-03 15:53:26.913022 | orchestrator | 2025-06-03 15:53:26.913031 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:53:26.913042 | orchestrator | 2025-06-03 15:53:26.913051 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:53:26.913061 | orchestrator | Tuesday 03 June 2025 15:50:04 +0000 (0:00:00.277) 0:00:00.277 ********** 2025-06-03 15:53:26.913071 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:53:26.913080 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:53:26.913090 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:53:26.913099 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:53:26.913108 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:53:26.913118 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:53:26.913128 | orchestrator | 2025-06-03 15:53:26.913137 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:53:26.913146 | orchestrator | Tuesday 03 June 2025 15:50:05 +0000 (0:00:00.708) 0:00:00.985 ********** 2025-06-03 15:53:26.913192 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-06-03
15:53:26.913215 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-06-03 15:53:26.913221 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-06-03 15:53:26.913227 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-06-03 15:53:26.913233 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-06-03 15:53:26.913239 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-06-03 15:53:26.913249 | orchestrator | 2025-06-03 15:53:26.913259 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-06-03 15:53:26.913268 | orchestrator | 2025-06-03 15:53:26.913277 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-03 15:53:26.913286 | orchestrator | Tuesday 03 June 2025 15:50:06 +0000 (0:00:00.616) 0:00:01.602 ********** 2025-06-03 15:53:26.913297 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:53:26.913309 | orchestrator | 2025-06-03 15:53:26.913352 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-06-03 15:53:26.913373 | orchestrator | Tuesday 03 June 2025 15:50:07 +0000 (0:00:01.213) 0:00:02.815 ********** 2025-06-03 15:53:26.913380 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-06-03 15:53:26.913387 | orchestrator | 2025-06-03 15:53:26.913437 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-06-03 15:53:26.913447 | orchestrator | Tuesday 03 June 2025 15:50:10 +0000 (0:00:03.283) 0:00:06.099 ********** 2025-06-03 15:53:26.913457 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-06-03 15:53:26.913467 | orchestrator | changed: 
[testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-06-03 15:53:26.913476 | orchestrator | 2025-06-03 15:53:26.913485 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-06-03 15:53:26.913494 | orchestrator | Tuesday 03 June 2025 15:50:17 +0000 (0:00:06.467) 0:00:12.567 ********** 2025-06-03 15:53:26.913504 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-03 15:53:26.913514 | orchestrator | 2025-06-03 15:53:26.913554 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-06-03 15:53:26.913560 | orchestrator | Tuesday 03 June 2025 15:50:20 +0000 (0:00:03.398) 0:00:15.965 ********** 2025-06-03 15:53:26.913566 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-03 15:53:26.913572 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-06-03 15:53:26.913578 | orchestrator | 2025-06-03 15:53:26.913584 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-06-03 15:53:26.913614 | orchestrator | Tuesday 03 June 2025 15:50:24 +0000 (0:00:03.871) 0:00:19.837 ********** 2025-06-03 15:53:26.913620 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-03 15:53:26.913626 | orchestrator | 2025-06-03 15:53:26.913632 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-06-03 15:53:26.913638 | orchestrator | Tuesday 03 June 2025 15:50:27 +0000 (0:00:03.389) 0:00:23.226 ********** 2025-06-03 15:53:26.913643 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-06-03 15:53:26.913649 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-06-03 15:53:26.913655 | orchestrator | 2025-06-03 15:53:26.913660 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 
2025-06-03 15:53:26.913666 | orchestrator | Tuesday 03 June 2025 15:50:35 +0000 (0:00:08.247) 0:00:31.474 ********** 2025-06-03 15:53:26.913675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:53:26.913706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:53:26.913714 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.913720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:53:26.913731 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.913738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.913750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.913757 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.913764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.913770 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.913836 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.913845 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.913851 | orchestrator | 2025-06-03 15:53:26.913861 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-03 15:53:26.913867 | orchestrator | Tuesday 03 June 2025 15:50:37 +0000 (0:00:01.980) 0:00:33.455 ********** 2025-06-03 15:53:26.913873 | orchestrator | skipping: 
[testbed-node-0] 2025-06-03 15:53:26.913879 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:53:26.913885 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:53:26.913890 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:53:26.913896 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:53:26.913902 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:53:26.913907 | orchestrator | 2025-06-03 15:53:26.913913 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-03 15:53:26.913919 | orchestrator | Tuesday 03 June 2025 15:50:38 +0000 (0:00:00.487) 0:00:33.942 ********** 2025-06-03 15:53:26.913925 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:53:26.913930 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:53:26.913936 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:53:26.913945 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:53:26.913951 | orchestrator | 2025-06-03 15:53:26.913957 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-06-03 15:53:26.913962 | orchestrator | Tuesday 03 June 2025 15:50:39 +0000 (0:00:00.989) 0:00:34.931 ********** 2025-06-03 15:53:26.913968 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-06-03 15:53:26.913974 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-06-03 15:53:26.913980 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-06-03 15:53:26.913985 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-06-03 15:53:26.913991 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-06-03 15:53:26.913997 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-06-03 15:53:26.914007 | orchestrator | 2025-06-03 15:53:26.914076 | orchestrator | TASK [cinder : Copying over multiple 
ceph.conf for cinder services] ************ 2025-06-03 15:53:26.914085 | orchestrator | Tuesday 03 June 2025 15:50:41 +0000 (0:00:01.972) 0:00:36.904 ********** 2025-06-03 15:53:26.914093 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-03 15:53:26.914101 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-03 15:53:26.914108 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-03 15:53:26.914120 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-03 15:53:26.914130 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-03 15:53:26.914142 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-03 15:53:26.914149 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 
'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-03 15:53:26.914156 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-03 15:53:26.914170 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-03 15:53:26.914177 | orchestrator | changed: 
[testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-03 15:53:26.914188 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-03 15:53:26.914194 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-03 15:53:26.914200 | orchestrator | 2025-06-03 15:53:26.914206 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-06-03 15:53:26.914212 | orchestrator | Tuesday 03 June 2025 15:50:45 +0000 (0:00:03.626) 0:00:40.530 ********** 2025-06-03 15:53:26.914218 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-03 15:53:26.914225 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-03 15:53:26.914231 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-03 15:53:26.914237 | orchestrator | 2025-06-03 15:53:26.914242 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-06-03 15:53:26.914248 | orchestrator | Tuesday 03 June 2025 15:50:48 +0000 (0:00:03.256) 0:00:43.786 ********** 2025-06-03 15:53:26.914254 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-06-03 15:53:26.914260 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-06-03 15:53:26.914265 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-06-03 15:53:26.914271 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-06-03 15:53:26.914277 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-06-03 15:53:26.914286 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-06-03 15:53:26.914292 | 
orchestrator | 2025-06-03 15:53:26.914298 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-06-03 15:53:26.914303 | orchestrator | Tuesday 03 June 2025 15:50:52 +0000 (0:00:03.929) 0:00:47.716 ********** 2025-06-03 15:53:26.914309 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-06-03 15:53:26.914321 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-06-03 15:53:26.914330 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-06-03 15:53:26.914339 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-06-03 15:53:26.914348 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-06-03 15:53:26.914358 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-06-03 15:53:26.914367 | orchestrator | 2025-06-03 15:53:26.914376 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-06-03 15:53:26.914384 | orchestrator | Tuesday 03 June 2025 15:50:53 +0000 (0:00:00.916) 0:00:48.633 ********** 2025-06-03 15:53:26.914403 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:53:26.914414 | orchestrator | 2025-06-03 15:53:26.914424 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-06-03 15:53:26.914434 | orchestrator | Tuesday 03 June 2025 15:50:53 +0000 (0:00:00.130) 0:00:48.763 ********** 2025-06-03 15:53:26.914445 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:53:26.914455 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:53:26.914465 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:53:26.914472 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:53:26.914477 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:53:26.914483 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:53:26.914488 | orchestrator | 2025-06-03 15:53:26.914494 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2025-06-03 15:53:26.914500 | orchestrator | Tuesday 03 June 2025 15:50:53 +0000 (0:00:00.731) 0:00:49.495 ********** 2025-06-03 15:53:26.914507 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:53:26.914514 | orchestrator | 2025-06-03 15:53:26.914520 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-06-03 15:53:26.914526 | orchestrator | Tuesday 03 June 2025 15:50:55 +0000 (0:00:01.570) 0:00:51.065 ********** 2025-06-03 15:53:26.914532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:53:26.914539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:53:26.914551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:53:26.914567 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.914574 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.914580 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.914586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.914593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.914609 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.914618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.914624 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.914631 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.914637 | orchestrator | 2025-06-03 15:53:26.914643 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-06-03 15:53:26.914649 | orchestrator | Tuesday 03 June 2025 15:50:58 +0000 (0:00:03.379) 0:00:54.445 ********** 2025-06-03 15:53:26.914655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:53:26.914668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 
15:53:26.914691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:53:26.914705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.914711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:53:26.914717 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:53:26.914724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.914734 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:53:26.914740 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:53:26.914746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.914757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.914763 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:53:26.914773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.914795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.914805 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:53:26.914811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.914822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.914828 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:53:26.914834 | orchestrator | 2025-06-03 15:53:26.914840 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-06-03 15:53:26.914846 | orchestrator | Tuesday 03 June 2025 15:51:00 +0000 (0:00:01.648) 0:00:56.094 ********** 2025-06-03 15:53:26.914860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:53:26.914867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.914873 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:53:26.914879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:53:26.914885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.914895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:53:26.914901 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:53:26.914914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.914920 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:53:26.914929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.914936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.914942 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:53:26.914950 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.914965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.914974 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:53:26.914989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.915003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.915013 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:53:26.915068 | orchestrator | 2025-06-03 15:53:26.915076 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-06-03 15:53:26.915082 | orchestrator | Tuesday 03 June 2025 15:51:02 +0000 (0:00:01.991) 0:00:58.085 ********** 2025-06-03 15:53:26.915088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:53:26.915105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:53:26.915112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:53:26.915124 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.915134 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.915140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.915151 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.915157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.915163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.915173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.915182 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.915189 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.915199 | orchestrator | 2025-06-03 15:53:26.915205 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-06-03 15:53:26.915211 | orchestrator | Tuesday 03 June 2025 15:51:05 +0000 (0:00:03.301) 0:01:01.386 ********** 2025-06-03 15:53:26.915217 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-03 15:53:26.915223 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-03 15:53:26.915229 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:53:26.915235 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-03 15:53:26.915240 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:53:26.915246 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-03 15:53:26.915252 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:53:26.915257 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-03 15:53:26.915265 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-03 15:53:26.915274 | orchestrator | 2025-06-03 15:53:26.915283 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-06-03 15:53:26.915293 | orchestrator | Tuesday 03 June 2025 15:51:08 
+0000 (0:00:02.807) 0:01:04.194 ********** 2025-06-03 15:53:26.915302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:53:26.915629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:53:26.915654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:53:26.915674 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.915681 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.915693 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.915700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.915710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.915720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.915727 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.915734 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.915740 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.915746 | orchestrator | 2025-06-03 15:53:26.915752 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-03 15:53:26.915758 | orchestrator | Tuesday 03 June 2025 15:51:18 +0000 (0:00:09.899) 0:01:14.093 ********** 2025-06-03 15:53:26.915767 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:53:26.915774 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:53:26.915831 | 
orchestrator | skipping: [testbed-node-2] 2025-06-03 15:53:26.915837 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:53:26.915843 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:53:26.915849 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:53:26.915855 | orchestrator | 2025-06-03 15:53:26.915860 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-06-03 15:53:26.915866 | orchestrator | Tuesday 03 June 2025 15:51:20 +0000 (0:00:02.035) 0:01:16.128 ********** 2025-06-03 15:53:26.915876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:53:26.915888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.915894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:53:26.915901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.915907 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:53:26.915913 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:53:26.915923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-03 15:53:26.915933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.915944 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:53:26.915951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.915957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.915963 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:53:26.915969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  
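The `(item={'key': …, 'value': …})` labels on every loop result in this play come from Ansible's `dict2items` filter applied to the role's service map. As a rough illustration (not part of the job output; the `services` sample below is a trimmed, hypothetical subset of the dicts echoed above), it behaves like:

```python
# Minimal sketch of Ansible's dict2items filter, which produces the
# {'key': ..., 'value': ...} loop items visible in this log.
def dict2items(d):
    return [{"key": k, "value": v} for k, v in d.items()]

# Trimmed stand-in for the cinder service map (illustrative only).
services = {
    "cinder-api": {"container_name": "cinder_api", "group": "cinder-api"},
    "cinder-scheduler": {"container_name": "cinder_scheduler", "group": "cinder-scheduler"},
}

for item in dict2items(services):
    print(item["key"], "->", item["value"]["container_name"])
```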
2025-06-03 15:53:26.915975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.915981 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:53:26.915995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.916006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-03 15:53:26.916012 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:53:26.916018 | orchestrator | 2025-06-03 15:53:26.916023 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-06-03 15:53:26.916029 | orchestrator | Tuesday 03 June 2025 15:51:21 +0000 (0:00:01.283) 0:01:17.412 ********** 2025-06-03 15:53:26.916035 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:53:26.916041 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:53:26.916047 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:53:26.916053 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:53:26.916058 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:53:26.916064 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:53:26.916070 | orchestrator | 2025-06-03 15:53:26.916076 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-06-03 15:53:26.916082 | orchestrator | Tuesday 03 June 2025 15:51:22 +0000 (0:00:00.863) 0:01:18.275 ********** 2025-06-03 15:53:26.916088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:53:26.916094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:53:26.916113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-03 15:53:26.916128 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.916138 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.916149 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.916159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.916181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.916198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.916208 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.916219 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.916227 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-03 15:53:26.916234 | orchestrator | 2025-06-03 15:53:26.916241 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-03 15:53:26.916247 | orchestrator | Tuesday 03 June 2025 15:51:25 +0000 (0:00:02.424) 0:01:20.699 ********** 2025-06-03 15:53:26.916254 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:53:26.916260 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:53:26.916267 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:53:26.916279 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:53:26.916285 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:53:26.916292 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:53:26.916298 | orchestrator | 2025-06-03 15:53:26.916305 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-06-03 15:53:26.916312 | orchestrator | Tuesday 03 June 2025 15:51:25 +0000 (0:00:00.660) 
0:01:21.360 ********** 2025-06-03 15:53:26.916319 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:53:26.916324 | orchestrator | 2025-06-03 15:53:26.916330 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-06-03 15:53:26.916336 | orchestrator | Tuesday 03 June 2025 15:51:28 +0000 (0:00:02.194) 0:01:23.555 ********** 2025-06-03 15:53:26.916342 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:53:26.916348 | orchestrator | 2025-06-03 15:53:26.916353 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-06-03 15:53:26.916359 | orchestrator | Tuesday 03 June 2025 15:51:30 +0000 (0:00:02.100) 0:01:25.656 ********** 2025-06-03 15:53:26.916365 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:53:26.916371 | orchestrator | 2025-06-03 15:53:26.916377 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-03 15:53:26.916383 | orchestrator | Tuesday 03 June 2025 15:51:55 +0000 (0:00:25.807) 0:01:51.464 ********** 2025-06-03 15:53:26.916389 | orchestrator | 2025-06-03 15:53:26.916398 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-03 15:53:26.916408 | orchestrator | Tuesday 03 June 2025 15:51:56 +0000 (0:00:00.067) 0:01:51.531 ********** 2025-06-03 15:53:26.916417 | orchestrator | 2025-06-03 15:53:26.916426 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-03 15:53:26.916435 | orchestrator | Tuesday 03 June 2025 15:51:56 +0000 (0:00:00.064) 0:01:51.596 ********** 2025-06-03 15:53:26.916446 | orchestrator | 2025-06-03 15:53:26.916452 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-03 15:53:26.916458 | orchestrator | Tuesday 03 June 2025 15:51:56 +0000 (0:00:00.066) 0:01:51.662 ********** 2025-06-03 15:53:26.916463 | orchestrator | 
2025-06-03 15:53:26.916469 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-03 15:53:26.916475 | orchestrator | Tuesday 03 June 2025 15:51:56 +0000 (0:00:00.070) 0:01:51.732 ********** 2025-06-03 15:53:26.916480 | orchestrator | 2025-06-03 15:53:26.916490 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-03 15:53:26.916496 | orchestrator | Tuesday 03 June 2025 15:51:56 +0000 (0:00:00.064) 0:01:51.797 ********** 2025-06-03 15:53:26.916501 | orchestrator | 2025-06-03 15:53:26.916507 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-06-03 15:53:26.916513 | orchestrator | Tuesday 03 June 2025 15:51:56 +0000 (0:00:00.070) 0:01:51.867 ********** 2025-06-03 15:53:26.916519 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:53:26.916524 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:53:26.916530 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:53:26.916536 | orchestrator | 2025-06-03 15:53:26.916542 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-06-03 15:53:26.916548 | orchestrator | Tuesday 03 June 2025 15:52:24 +0000 (0:00:27.915) 0:02:19.783 ********** 2025-06-03 15:53:26.916553 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:53:26.916559 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:53:26.916565 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:53:26.916571 | orchestrator | 2025-06-03 15:53:26.916576 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-06-03 15:53:26.916582 | orchestrator | Tuesday 03 June 2025 15:52:34 +0000 (0:00:10.619) 0:02:30.403 ********** 2025-06-03 15:53:26.916588 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:53:26.916593 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:53:26.916599 | orchestrator | changed: 
[testbed-node-5] 2025-06-03 15:53:26.916605 | orchestrator | 2025-06-03 15:53:26.916610 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-06-03 15:53:26.916621 | orchestrator | Tuesday 03 June 2025 15:53:14 +0000 (0:00:39.304) 0:03:09.708 ********** 2025-06-03 15:53:26.916627 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:53:26.916633 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:53:26.916639 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:53:26.916644 | orchestrator | 2025-06-03 15:53:26.916650 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-06-03 15:53:26.916656 | orchestrator | Tuesday 03 June 2025 15:53:25 +0000 (0:00:10.952) 0:03:20.660 ********** 2025-06-03 15:53:26.916662 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:53:26.916667 | orchestrator | 2025-06-03 15:53:26.916673 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:53:26.916679 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-03 15:53:26.916686 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-03 15:53:26.916692 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-03 15:53:26.916698 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-03 15:53:26.916704 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-03 15:53:26.916710 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-03 15:53:26.916715 | orchestrator | 2025-06-03 15:53:26.916721 | orchestrator | 2025-06-03 15:53:26.916727 | orchestrator | TASKS RECAP 
******************************************************************** 2025-06-03 15:53:26.916733 | orchestrator | Tuesday 03 June 2025 15:53:25 +0000 (0:00:00.633) 0:03:21.294 ********** 2025-06-03 15:53:26.916739 | orchestrator | =============================================================================== 2025-06-03 15:53:26.916744 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 39.30s 2025-06-03 15:53:26.916750 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.92s 2025-06-03 15:53:26.916756 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 25.81s 2025-06-03 15:53:26.916762 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.95s 2025-06-03 15:53:26.916768 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.62s 2025-06-03 15:53:26.916773 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.90s 2025-06-03 15:53:26.916795 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.25s 2025-06-03 15:53:26.916802 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.47s 2025-06-03 15:53:26.916812 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.93s 2025-06-03 15:53:26.916818 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.87s 2025-06-03 15:53:26.916823 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.63s 2025-06-03 15:53:26.916829 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.40s 2025-06-03 15:53:26.916835 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.39s 2025-06-03 15:53:26.916840 | orchestrator | service-cert-copy : cinder | 
Copying over extra CA certificates --------- 3.38s 2025-06-03 15:53:26.916846 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.30s 2025-06-03 15:53:26.916852 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.28s 2025-06-03 15:53:26.916863 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 3.26s 2025-06-03 15:53:26.916872 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.81s 2025-06-03 15:53:26.916878 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.42s 2025-06-03 15:53:26.916884 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.19s 2025-06-03 15:53:26.916889 | orchestrator | 2025-06-03 15:53:26 | INFO  | Task 253602b5-2607-4111-b76e-2b088185d6ae is in state STARTED 2025-06-03 15:53:26.916895 | orchestrator | 2025-06-03 15:53:26 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:29.985567 | orchestrator | 2025-06-03 15:53:29 | INFO  | Task b2168855-96c1-45d7-9527-31917d808d5f is in state STARTED 2025-06-03 15:53:29.986478 | orchestrator | 2025-06-03 15:53:29 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:53:29.988481 | orchestrator | 2025-06-03 15:53:29 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:53:29.990337 | orchestrator | 2025-06-03 15:53:29 | INFO  | Task 253602b5-2607-4111-b76e-2b088185d6ae is in state STARTED 2025-06-03 15:53:29.990373 | orchestrator | 2025-06-03 15:53:29 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:33.024160 | orchestrator | 2025-06-03 15:53:33 | INFO  | Task b2168855-96c1-45d7-9527-31917d808d5f is in state STARTED 2025-06-03 15:53:33.025724 | orchestrator | 2025-06-03 15:53:33 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:53:33.027590 | 
orchestrator | 2025-06-03 15:53:33 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:53:33.028537 | orchestrator | 2025-06-03 15:53:33 | INFO  | Task 253602b5-2607-4111-b76e-2b088185d6ae is in state SUCCESS 2025-06-03 15:53:33.029164 | orchestrator | 2025-06-03 15:53:33.029205 | orchestrator | 2025-06-03 15:53:33.029214 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:53:33.029223 | orchestrator | 2025-06-03 15:53:33.029230 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:53:33.029238 | orchestrator | Tuesday 03 June 2025 15:52:38 +0000 (0:00:00.234) 0:00:00.234 ********** 2025-06-03 15:53:33.029246 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:53:33.029255 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:53:33.029263 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:53:33.029270 | orchestrator | 2025-06-03 15:53:33.029278 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:53:33.029287 | orchestrator | Tuesday 03 June 2025 15:52:38 +0000 (0:00:00.251) 0:00:00.486 ********** 2025-06-03 15:53:33.029293 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-06-03 15:53:33.029298 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-06-03 15:53:33.029303 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-06-03 15:53:33.029307 | orchestrator | 2025-06-03 15:53:33.029312 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-06-03 15:53:33.029316 | orchestrator | 2025-06-03 15:53:33.029321 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-03 15:53:33.029326 | orchestrator | Tuesday 03 June 2025 15:52:38 +0000 (0:00:00.337) 0:00:00.823 ********** 2025-06-03 15:53:33.029331 | 
orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:53:33.029336 | orchestrator | 2025-06-03 15:53:33.029341 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-06-03 15:53:33.029345 | orchestrator | Tuesday 03 June 2025 15:52:39 +0000 (0:00:00.667) 0:00:01.490 ********** 2025-06-03 15:53:33.029350 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-06-03 15:53:33.029355 | orchestrator | 2025-06-03 15:53:33.029359 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-06-03 15:53:33.029388 | orchestrator | Tuesday 03 June 2025 15:52:42 +0000 (0:00:03.379) 0:00:04.870 ********** 2025-06-03 15:53:33.029393 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-06-03 15:53:33.029397 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-06-03 15:53:33.029402 | orchestrator | 2025-06-03 15:53:33.029406 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-06-03 15:53:33.029411 | orchestrator | Tuesday 03 June 2025 15:52:49 +0000 (0:00:06.496) 0:00:11.366 ********** 2025-06-03 15:53:33.029415 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-03 15:53:33.029420 | orchestrator | 2025-06-03 15:53:33.029424 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-06-03 15:53:33.029428 | orchestrator | Tuesday 03 June 2025 15:52:52 +0000 (0:00:02.995) 0:00:14.362 ********** 2025-06-03 15:53:33.029433 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-03 15:53:33.029438 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-03 15:53:33.029449 | orchestrator | changed: [testbed-node-0] => 
(item=octavia -> service) 2025-06-03 15:53:33.029454 | orchestrator | 2025-06-03 15:53:33.029458 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-06-03 15:53:33.029463 | orchestrator | Tuesday 03 June 2025 15:53:00 +0000 (0:00:07.814) 0:00:22.176 ********** 2025-06-03 15:53:33.029468 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-03 15:53:33.029472 | orchestrator | 2025-06-03 15:53:33.029476 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-06-03 15:53:33.029481 | orchestrator | Tuesday 03 June 2025 15:53:03 +0000 (0:00:03.276) 0:00:25.453 ********** 2025-06-03 15:53:33.029496 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-03 15:53:33.029501 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-03 15:53:33.029505 | orchestrator | 2025-06-03 15:53:33.029509 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-06-03 15:53:33.029514 | orchestrator | Tuesday 03 June 2025 15:53:10 +0000 (0:00:07.542) 0:00:32.995 ********** 2025-06-03 15:53:33.029518 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-06-03 15:53:33.029522 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-06-03 15:53:33.029527 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-06-03 15:53:33.029531 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-06-03 15:53:33.029536 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-06-03 15:53:33.029540 | orchestrator | 2025-06-03 15:53:33.029545 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-03 15:53:33.029549 | orchestrator | Tuesday 03 June 2025 15:53:26 +0000 (0:00:15.888) 0:00:48.884 ********** 2025-06-03 
15:53:33.029553 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:53:33.029558 | orchestrator | 2025-06-03 15:53:33.029562 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-06-03 15:53:33.029566 | orchestrator | Tuesday 03 June 2025 15:53:27 +0000 (0:00:00.588) 0:00:49.473 ********** 2025-06-03 15:53:33.029572 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found 2025-06-03 15:53:33.029599 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1748966008.8587992-6744-82420319592423/AnsiballZ_compute_flavor.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1748966008.8587992-6744-82420319592423/AnsiballZ_compute_flavor.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1748966008.8587992-6744-82420319592423/AnsiballZ_compute_flavor.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_nova_flavor_payload_1w8hbacv/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 367, in \n File \"/tmp/ansible_os_nova_flavor_payload_1w8hbacv/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 363, in main\n File 
\"/tmp/ansible_os_nova_flavor_payload_1w8hbacv/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_nova_flavor_payload_1w8hbacv/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 220, in run\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 272, in get_endpoint_data\n endpoint_data = service_catalog.endpoint_data_for(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/access/service_catalog.py\", line 459, in endpoint_data_for\n raise exceptions.EndpointNotFound(msg)\nkeystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr 
for the exact error", "rc": 1} 2025-06-03 15:53:33.029619 | orchestrator | 2025-06-03 15:53:33.029624 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:53:33.029629 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-06-03 15:53:33.029637 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:53:33.029645 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:53:33.029651 | orchestrator | 2025-06-03 15:53:33.029659 | orchestrator | 2025-06-03 15:53:33.029670 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:53:33.029676 | orchestrator | Tuesday 03 June 2025 15:53:30 +0000 (0:00:03.417) 0:00:52.890 ********** 2025-06-03 15:53:33.029693 | orchestrator | =============================================================================== 2025-06-03 15:53:33.029700 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.89s 2025-06-03 15:53:33.029707 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.81s 2025-06-03 15:53:33.029714 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.54s 2025-06-03 15:53:33.029720 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.50s 2025-06-03 15:53:33.029726 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.42s 2025-06-03 15:53:33.029733 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.38s 2025-06-03 15:53:33.029740 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.28s 2025-06-03 15:53:33.029747 | orchestrator | service-ks-register : octavia | Creating 
projects ----------------------- 3.00s 2025-06-03 15:53:33.029754 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.67s 2025-06-03 15:53:33.029761 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.59s 2025-06-03 15:53:33.029790 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s 2025-06-03 15:53:33.029798 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s 2025-06-03 15:53:33.029805 | orchestrator | 2025-06-03 15:53:33 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:53:36.072824 | orchestrator | 2025-06-03 15:53:36 | INFO  | Task b2168855-96c1-45d7-9527-31917d808d5f is in state STARTED 2025-06-03 15:53:36.074381 | orchestrator | 2025-06-03 15:53:36 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:53:36.074445 | orchestrator | 2025-06-03 15:53:36 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:53:36.074455 | orchestrator | 2025-06-03 15:53:36 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:01.364310 | orchestrator | 2025-06-03 15:55:01 | INFO  | Task b2168855-96c1-45d7-9527-31917d808d5f is in state STARTED 2025-06-03 15:55:01.367101 | orchestrator | 2025-06-03 15:55:01 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:55:01.367183 | orchestrator | 2025-06-03 15:55:01 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:55:01.367202 | orchestrator | 2025-06-03 15:55:01 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:04.406565 | orchestrator | 2025-06-03 15:55:04 | INFO  | Task b2168855-96c1-45d7-9527-31917d808d5f is in state STARTED 2025-06-03 15:55:04.407938 | orchestrator | 2025-06-03 15:55:04 | INFO  | Task
9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:55:04.410130 | orchestrator | 2025-06-03 15:55:04 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:55:04.410183 | orchestrator | 2025-06-03 15:55:04 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:07.460616 | orchestrator | 2025-06-03 15:55:07 | INFO  | Task b2168855-96c1-45d7-9527-31917d808d5f is in state STARTED 2025-06-03 15:55:07.462096 | orchestrator | 2025-06-03 15:55:07 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:55:07.465487 | orchestrator | 2025-06-03 15:55:07 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state STARTED 2025-06-03 15:55:07.465534 | orchestrator | 2025-06-03 15:55:07 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:10.510571 | orchestrator | 2025-06-03 15:55:10 | INFO  | Task b2168855-96c1-45d7-9527-31917d808d5f is in state STARTED 2025-06-03 15:55:10.512177 | orchestrator | 2025-06-03 15:55:10 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:55:10.514624 | orchestrator | 2025-06-03 15:55:10 | INFO  | Task 98fbb740-60f5-495a-ae75-a50b82311151 is in state SUCCESS 2025-06-03 15:55:10.514686 | orchestrator | 2025-06-03 15:55:10 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:13.554054 | orchestrator | 2025-06-03 15:55:13 | INFO  | Task b2168855-96c1-45d7-9527-31917d808d5f is in state STARTED 2025-06-03 15:55:13.554151 | orchestrator | 2025-06-03 15:55:13 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:55:13.554162 | orchestrator | 2025-06-03 15:55:13 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:16.596979 | orchestrator | 2025-06-03 15:55:16 | INFO  | Task b2168855-96c1-45d7-9527-31917d808d5f is in state STARTED 2025-06-03 15:55:16.601041 | orchestrator | 2025-06-03 15:55:16 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state 
STARTED 2025-06-03 15:55:16.601148 | orchestrator | 2025-06-03 15:55:16 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:19.650741 | orchestrator | 2025-06-03 15:55:19 | INFO  | Task b2168855-96c1-45d7-9527-31917d808d5f is in state STARTED 2025-06-03 15:55:19.653603 | orchestrator | 2025-06-03 15:55:19 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:55:19.653835 | orchestrator | 2025-06-03 15:55:19 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:22.704820 | orchestrator | 2025-06-03 15:55:22 | INFO  | Task b2168855-96c1-45d7-9527-31917d808d5f is in state STARTED 2025-06-03 15:55:22.706220 | orchestrator | 2025-06-03 15:55:22 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:55:22.706283 | orchestrator | 2025-06-03 15:55:22 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:25.744514 | orchestrator | 2025-06-03 15:55:25 | INFO  | Task b2168855-96c1-45d7-9527-31917d808d5f is in state STARTED 2025-06-03 15:55:25.746188 | orchestrator | 2025-06-03 15:55:25 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:55:25.746281 | orchestrator | 2025-06-03 15:55:25 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:28.786871 | orchestrator | 2025-06-03 15:55:28 | INFO  | Task b2168855-96c1-45d7-9527-31917d808d5f is in state STARTED 2025-06-03 15:55:28.787240 | orchestrator | 2025-06-03 15:55:28 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:55:28.787276 | orchestrator | 2025-06-03 15:55:28 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:31.834568 | orchestrator | 2025-06-03 15:55:31 | INFO  | Task b2168855-96c1-45d7-9527-31917d808d5f is in state STARTED 2025-06-03 15:55:31.835114 | orchestrator | 2025-06-03 15:55:31 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:55:31.835244 | orchestrator | 2025-06-03 15:55:31 | INFO  
| Wait 1 second(s) until the next check 2025-06-03 15:55:34.876598 | orchestrator | 2025-06-03 15:55:34 | INFO  | Task b2168855-96c1-45d7-9527-31917d808d5f is in state STARTED 2025-06-03 15:55:34.877701 | orchestrator | 2025-06-03 15:55:34 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:55:34.877782 | orchestrator | 2025-06-03 15:55:34 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:37.928271 | orchestrator | 2025-06-03 15:55:37 | INFO  | Task b2168855-96c1-45d7-9527-31917d808d5f is in state STARTED 2025-06-03 15:55:37.930141 | orchestrator | 2025-06-03 15:55:37 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:55:37.930186 | orchestrator | 2025-06-03 15:55:37 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:40.988191 | orchestrator | 2025-06-03 15:55:40 | INFO  | Task b2168855-96c1-45d7-9527-31917d808d5f is in state STARTED 2025-06-03 15:55:40.989563 | orchestrator | 2025-06-03 15:55:40 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:55:40.989712 | orchestrator | 2025-06-03 15:55:40 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:44.042699 | orchestrator | 2025-06-03 15:55:44 | INFO  | Task b2168855-96c1-45d7-9527-31917d808d5f is in state SUCCESS 2025-06-03 15:55:44.045061 | orchestrator | 2025-06-03 15:55:44.045135 | orchestrator | 2025-06-03 15:55:44.045144 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:55:44.045153 | orchestrator | 2025-06-03 15:55:44.045160 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:55:44.045192 | orchestrator | Tuesday 03 June 2025 15:51:05 +0000 (0:00:00.189) 0:00:00.189 ********** 2025-06-03 15:55:44.045197 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:55:44.045202 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:55:44.045206 | orchestrator 
| ok: [testbed-node-2] 2025-06-03 15:55:44.045210 | orchestrator | 2025-06-03 15:55:44.045215 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 15:55:44.045219 | orchestrator | Tuesday 03 June 2025 15:51:06 +0000 (0:00:00.347) 0:00:00.537 ********** 2025-06-03 15:55:44.045223 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-06-03 15:55:44.045228 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-06-03 15:55:44.045232 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-06-03 15:55:44.045236 | orchestrator | 2025-06-03 15:55:44.045240 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-06-03 15:55:44.045244 | orchestrator | 2025-06-03 15:55:44.045248 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-06-03 15:55:44.045253 | orchestrator | Tuesday 03 June 2025 15:51:07 +0000 (0:00:01.314) 0:00:01.851 ********** 2025-06-03 15:55:44.045256 | orchestrator | 2025-06-03 15:55:44.045260 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-06-03 15:55:44.045264 | orchestrator | 2025-06-03 15:55:44.045268 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-06-03 15:55:44.045272 | orchestrator | 2025-06-03 15:55:44.045276 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-06-03 15:55:44.045280 | orchestrator | 2025-06-03 15:55:44.045284 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-06-03 15:55:44.045288 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:55:44.045292 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:55:44.045295 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:55:44.045299 | orchestrator | 2025-06-03 15:55:44.045303 | orchestrator | 
PLAY RECAP ********************************************************************* 2025-06-03 15:55:44.045308 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:55:44.045315 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:55:44.045319 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:55:44.045323 | orchestrator | 2025-06-03 15:55:44.045326 | orchestrator | 2025-06-03 15:55:44.045330 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:55:44.045334 | orchestrator | Tuesday 03 June 2025 15:55:09 +0000 (0:04:02.152) 0:04:04.004 ********** 2025-06-03 15:55:44.045338 | orchestrator | =============================================================================== 2025-06-03 15:55:44.045342 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 242.15s 2025-06-03 15:55:44.045346 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.31s 2025-06-03 15:55:44.045350 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2025-06-03 15:55:44.045354 | orchestrator | 2025-06-03 15:55:44.045359 | orchestrator | 2025-06-03 15:55:44.045363 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 15:55:44.045367 | orchestrator | 2025-06-03 15:55:44.045370 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 15:55:44.045374 | orchestrator | Tuesday 03 June 2025 15:53:30 +0000 (0:00:00.331) 0:00:00.331 ********** 2025-06-03 15:55:44.045378 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:55:44.045382 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:55:44.045386 | orchestrator | ok: [testbed-node-2] 
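The console above shows the client-side wait pattern: the orchestrator's task states are fetched on each iteration, a "Wait 1 second(s) until the next check" line is logged, and polling stops once no task remains in STARTED. A minimal Python sketch of that pattern follows; the names `poll_tasks` and `get_state` are illustrative assumptions, not the real osism CLI API.

```python
import time

def poll_tasks(task_ids, get_state, interval=1.0, timeout=600.0):
    """Poll get_state(task_id) for each task until none is STARTED.

    Returns the final {task_id: state} mapping; raises TimeoutError if the
    tasks are still running when the deadline passes. This mirrors the log
    output above: one state line per task per cycle, then a wait notice.
    """
    deadline = time.monotonic() + timeout
    while True:
        states = {t: get_state(t) for t in task_ids}
        for task_id, state in states.items():
            print(f"Task {task_id} is in state {state}")
        if all(state != "STARTED" for state in states.values()):
            return states
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {states}")
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
```

Note that tasks finishing early (like 98fbb740 above, which reached SUCCESS half a minute before the others) simply keep reporting their terminal state until the slowest task completes.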
2025-06-03 15:55:44.045390 | orchestrator |
2025-06-03 15:55:44.045394 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-03 15:55:44.045398 | orchestrator | Tuesday 03 June 2025 15:53:30 +0000 (0:00:00.391) 0:00:00.723 **********
2025-06-03 15:55:44.045406 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-06-03 15:55:44.045410 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-06-03 15:55:44.045414 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-06-03 15:55:44.045516 | orchestrator |
2025-06-03 15:55:44.045521 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-06-03 15:55:44.045527 | orchestrator |
2025-06-03 15:55:44.045533 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-06-03 15:55:44.045539 | orchestrator | Tuesday 03 June 2025 15:53:31 +0000 (0:00:00.486) 0:00:01.210 **********
2025-06-03 15:55:44.045544 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:55:44.045549 | orchestrator |
2025-06-03 15:55:44.045556 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-06-03 15:55:44.045595 | orchestrator | Tuesday 03 June 2025 15:53:31 +0000 (0:00:00.574) 0:00:01.784 **********
2025-06-03 15:55:44.045645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-03 15:55:44.045655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-03 15:55:44.045663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-03 15:55:44.045670 | orchestrator |
2025-06-03 15:55:44.045676 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-06-03 15:55:44.045682 | orchestrator | Tuesday 03 June 2025 15:53:32 +0000 (0:00:00.719) 0:00:02.503 **********
2025-06-03 15:55:44.045687 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-06-03 15:55:44.045694 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-06-03 15:55:44.045700 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-03 15:55:44.045706 | orchestrator |
2025-06-03 15:55:44.045712 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-06-03 15:55:44.045718 | orchestrator | Tuesday 03 June 2025 15:53:33 +0000 (0:00:00.771) 0:00:03.274 **********
2025-06-03 15:55:44.045724 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-03 15:55:44.045736 | orchestrator |
2025-06-03 15:55:44.045742 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-06-03 15:55:44.045747 | orchestrator | Tuesday 03 June 2025 15:53:34 +0000 (0:00:00.616) 0:00:03.891 **********
2025-06-03 15:55:44.045753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-03 15:55:44.045760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-03 15:55:44.045773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-03 15:55:44.045780 | orchestrator |
2025-06-03 15:55:44.045786 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-06-03 15:55:44.045793 | orchestrator | Tuesday 03 June 2025 15:53:35 +0000 (0:00:01.296) 0:00:05.187 **********
2025-06-03 15:55:44.045799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-03 15:55:44.045805 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:55:44.045812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-03 15:55:44.045824 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:55:44.045831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-03 15:55:44.045837 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:55:44.045844 | orchestrator |
2025-06-03 15:55:44.045850 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-06-03 15:55:44.045858 | orchestrator | Tuesday 03 June 2025 15:53:35 +0000 (0:00:00.373) 0:00:05.561 **********
2025-06-03 15:55:44.045862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-03 15:55:44.045866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-03 15:55:44.045871 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:55:44.045875 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:55:44.045884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-03 15:55:44.045888 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:55:44.045892 | orchestrator |
2025-06-03 15:55:44.045896 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-06-03 15:55:44.045900 | orchestrator | Tuesday 03 June 2025 15:53:36 +0000 (0:00:00.696) 0:00:06.258 **********
2025-06-03 15:55:44.045904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-03 15:55:44.045913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-03 15:55:44.045917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-03 15:55:44.045921 | orchestrator |
2025-06-03 15:55:44.045925 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-06-03 15:55:44.045929 | orchestrator | Tuesday 03 June 2025 15:53:37 +0000 (0:00:01.210) 0:00:07.468 **********
2025-06-03 15:55:44.045933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-03 15:55:44.045942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-03 15:55:44.045947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-03 15:55:44.045951 | orchestrator |
2025-06-03 15:55:44.045955 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-06-03 15:55:44.045963 | orchestrator | Tuesday 03 June 2025 15:53:38 +0000 (0:00:01.284) 0:00:08.752 **********
2025-06-03 15:55:44.045968 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:55:44.045971 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:55:44.045975 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:55:44.045979 | orchestrator |
2025-06-03 15:55:44.045983 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-06-03 15:55:44.045987 | orchestrator | Tuesday 03 June 2025 15:53:39 +0000 (0:00:00.594) 0:00:09.346 **********
2025-06-03 15:55:44.045991 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-03 15:55:44.045995 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-03 15:55:44.045999 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-03 15:55:44.046003 | orchestrator |
2025-06-03 15:55:44.046007 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-06-03 15:55:44.046011 | orchestrator | Tuesday 03 June 2025 15:53:40 +0000 (0:00:01.292) 0:00:10.639 **********
2025-06-03 15:55:44.046067 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-03 15:55:44.046075 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-03 15:55:44.046081 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-03 15:55:44.046087 | orchestrator |
2025-06-03 15:55:44.046092 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-06-03 15:55:44.046097 | orchestrator | Tuesday 03 June 2025 15:53:41 +0000 (0:00:01.212) 0:00:11.851 **********
2025-06-03 15:55:44.046103 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-03 15:55:44.046109 | orchestrator |
2025-06-03 15:55:44.046116 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-06-03 15:55:44.046121 | orchestrator | Tuesday 03 June 2025 15:53:42 +0000 (0:00:00.671) 0:00:12.523 **********
2025-06-03 15:55:44.046127 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-06-03 15:55:44.046133 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-06-03 15:55:44.046139 | orchestrator | ok: [testbed-node-0]
2025-06-03 15:55:44.046145 | orchestrator | ok: [testbed-node-1]
2025-06-03 15:55:44.046151 | orchestrator | ok: [testbed-node-2]
2025-06-03 15:55:44.046157 | orchestrator |
2025-06-03 15:55:44.046162 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-06-03 15:55:44.046169 | orchestrator | Tuesday 03 June 2025 15:53:43 +0000 (0:00:00.693) 0:00:13.216 **********
2025-06-03 15:55:44.046175 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:55:44.046181 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:55:44.046187 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:55:44.046193 | orchestrator |
2025-06-03 15:55:44.046199 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-06-03 15:55:44.046205 | orchestrator | Tuesday 03 June 2025 15:53:43 +0000 (0:00:00.421) 0:00:13.638 **********
2025-06-03 15:55:44.046214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1311639, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1998358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1311639, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1998358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1311639, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1998358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1311633, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1918356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1311633, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime':
1748936962.0, 'ctime': 1748963128.1918356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1311633, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1918356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1311629, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1878357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1311629, 'dev': 108, 'nlink': 1, 
'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1878357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1311629, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1878357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1311637, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1938357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1311637, 'dev': 
108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1938357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1311637, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1938357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1311623, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1838355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 
1311623, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1838355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1311623, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1838355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1311631, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1888356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
19609, 'inode': 1311631, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1888356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1311631, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1888356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1311636, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1918356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1311636, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1918356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1311636, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1918356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1311620, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1828356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1311620, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1828356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1311620, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1828356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1311613, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1768355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1311613, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1768355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1311613, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1768355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1311625, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1848354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1311625, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1848354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1311625, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1848354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1311617, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1808355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1311617, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1808355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1311617, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1808355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1311634, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1918356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1311634, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1918356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1311634, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1918356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1311627, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1868355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1311627, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1868355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1311627, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1868355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1311638, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1968358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046757 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1311638, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1968358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1311638, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1968358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1311619, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1828356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.046777 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1311619, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1828356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1311619, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1828356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1311632, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1898355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1311632, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1898355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1311632, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1898355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1311614, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1798356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1311614, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1798356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1311614, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1798356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1311618, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1818354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1311618, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1818354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1311618, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1818354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1311628, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1868355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1311628, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1868355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1311628, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.1868355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1311665, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2238362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1311665, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2238362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1311665, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2238362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1311656, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.212836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1311656, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.212836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1311642, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2008357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1311656, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.212836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1311642, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2008357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1311692, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2368364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.046998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1311692, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2368364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1311642, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2008357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1311643, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2008357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1311643, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2008357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1311692, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2368364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1311685, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2328362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1311685, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2328362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1311643, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2008357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1311693, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2408364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1311693, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2408364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1311685, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2328362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1311676, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2278361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1311676, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2278361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1311693, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2408364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1311680, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2308362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1311680, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2308362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1311676, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2278361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1311644, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2018359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1311644, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2018359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1311680, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2308362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1311657, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.214836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1311657, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.214836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1311644, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2018359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1311694, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2428365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1311694, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2428365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1311657, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.214836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1311687, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2348363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1311687, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2348363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1311694, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2428365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1311649, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2058358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1311649, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2058358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1311687, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2348363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1311646, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2038357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1311646, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2038357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1311649, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2058358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1311651, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.206836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1311651, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.206836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-03 15:55:44.047554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1311646, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2038357, 'gr_name': 'root', 'pw_name': 'root',
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.047563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1311654, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2118359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.047567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1311654, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2118359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.047571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1311651, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 
1748963128.206836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.047575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1311660, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.215836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.047585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1311660, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.215836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.047590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
410814, 'inode': 1311654, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2118359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.047599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1311678, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2288363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.047604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1311678, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2288363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.047608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1311662, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.215836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.047657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1311660, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.215836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.047668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1311662, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.215836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.047673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1311698, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2498364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.047681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1311698, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2498364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.047686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1311678, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2288363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.047690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1311662, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.215836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.047694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1311698, 'dev': 108, 'nlink': 1, 'atime': 1748936962.0, 'mtime': 1748936962.0, 'ctime': 1748963128.2498364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-03 15:55:44.047698 | orchestrator | 2025-06-03 15:55:44.047703 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-06-03 15:55:44.047708 | orchestrator | Tuesday 03 June 2025 15:54:21 +0000 (0:00:37.992) 0:00:51.631 ********** 2025-06-03 15:55:44.047720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:55:44.047724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:55:44.047733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-03 15:55:44.047737 | orchestrator | 2025-06-03 15:55:44.047741 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-06-03 15:55:44.047746 | orchestrator | Tuesday 03 June 2025 15:54:22 +0000 (0:00:00.990) 0:00:52.621 ********** 2025-06-03 15:55:44.047750 | orchestrator | changed: [testbed-node-0] 2025-06-03 
15:55:44.047754 | orchestrator | 2025-06-03 15:55:44.047758 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-06-03 15:55:44.047762 | orchestrator | Tuesday 03 June 2025 15:54:25 +0000 (0:00:02.447) 0:00:55.068 ********** 2025-06-03 15:55:44.047766 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:55:44.047770 | orchestrator | 2025-06-03 15:55:44.047774 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-03 15:55:44.047778 | orchestrator | Tuesday 03 June 2025 15:54:27 +0000 (0:00:02.315) 0:00:57.383 ********** 2025-06-03 15:55:44.047781 | orchestrator | 2025-06-03 15:55:44.047786 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-03 15:55:44.047790 | orchestrator | Tuesday 03 June 2025 15:54:27 +0000 (0:00:00.230) 0:00:57.614 ********** 2025-06-03 15:55:44.047794 | orchestrator | 2025-06-03 15:55:44.047798 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-03 15:55:44.047802 | orchestrator | Tuesday 03 June 2025 15:54:27 +0000 (0:00:00.062) 0:00:57.677 ********** 2025-06-03 15:55:44.047805 | orchestrator | 2025-06-03 15:55:44.047810 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-06-03 15:55:44.047814 | orchestrator | Tuesday 03 June 2025 15:54:27 +0000 (0:00:00.072) 0:00:57.749 ********** 2025-06-03 15:55:44.047818 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:55:44.047822 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:55:44.047826 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:55:44.047829 | orchestrator | 2025-06-03 15:55:44.047834 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-06-03 15:55:44.047838 | orchestrator | Tuesday 03 June 2025 15:54:34 +0000 (0:00:06.809) 0:01:04.558 ********** 2025-06-03 
15:55:44.047842 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:55:44.047846 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:55:44.047849 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-06-03 15:55:44.047855 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-06-03 15:55:44.047859 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-06-03 15:55:44.047863 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:55:44.047868 | orchestrator | 2025-06-03 15:55:44.047877 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-06-03 15:55:44.047882 | orchestrator | Tuesday 03 June 2025 15:55:13 +0000 (0:00:38.696) 0:01:43.255 ********** 2025-06-03 15:55:44.047886 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:55:44.047891 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:55:44.047895 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:55:44.047900 | orchestrator | 2025-06-03 15:55:44.047904 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-06-03 15:55:44.047909 | orchestrator | Tuesday 03 June 2025 15:55:37 +0000 (0:00:23.703) 0:02:06.958 ********** 2025-06-03 15:55:44.047913 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:55:44.047918 | orchestrator | 2025-06-03 15:55:44.047925 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-06-03 15:55:44.047930 | orchestrator | Tuesday 03 June 2025 15:55:39 +0000 (0:00:02.339) 0:02:09.297 ********** 2025-06-03 15:55:44.047937 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:55:44.047942 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:55:44.047946 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:55:44.047951 
| orchestrator | 2025-06-03 15:55:44.047955 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-06-03 15:55:44.047960 | orchestrator | Tuesday 03 June 2025 15:55:39 +0000 (0:00:00.291) 0:02:09.589 ********** 2025-06-03 15:55:44.047965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-06-03 15:55:44.047971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-06-03 15:55:44.047977 | orchestrator | 2025-06-03 15:55:44.047982 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-06-03 15:55:44.047986 | orchestrator | Tuesday 03 June 2025 15:55:42 +0000 (0:00:02.316) 0:02:11.905 ********** 2025-06-03 15:55:44.047991 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:55:44.047995 | orchestrator | 2025-06-03 15:55:44.048000 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:55:44.048004 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-03 15:55:44.048010 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-03 15:55:44.048015 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-03 15:55:44.048019 | orchestrator | 2025-06-03 15:55:44.048024 | orchestrator | 
2025-06-03 15:55:44.048028 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:55:44.048033 | orchestrator | Tuesday 03 June 2025 15:55:42 +0000 (0:00:00.251) 0:02:12.156 ********** 2025-06-03 15:55:44.048038 | orchestrator | =============================================================================== 2025-06-03 15:55:44.048042 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.70s 2025-06-03 15:55:44.048047 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.99s 2025-06-03 15:55:44.048051 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 23.70s 2025-06-03 15:55:44.048056 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.81s 2025-06-03 15:55:44.048060 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.45s 2025-06-03 15:55:44.048068 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.34s 2025-06-03 15:55:44.048073 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.32s 2025-06-03 15:55:44.048077 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.32s 2025-06-03 15:55:44.048082 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.30s 2025-06-03 15:55:44.048087 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.29s 2025-06-03 15:55:44.048091 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.28s 2025-06-03 15:55:44.048096 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.21s 2025-06-03 15:55:44.048100 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.21s 2025-06-03 
15:55:44.048104 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.99s 2025-06-03 15:55:44.048108 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.77s 2025-06-03 15:55:44.048112 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.72s 2025-06-03 15:55:44.048115 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.70s 2025-06-03 15:55:44.048119 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.69s 2025-06-03 15:55:44.048123 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.67s 2025-06-03 15:55:44.048127 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.62s 2025-06-03 15:55:44.048776 | orchestrator | 2025-06-03 15:55:44 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:55:44.048818 | orchestrator | 2025-06-03 15:55:44 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:47.104012 | orchestrator | 2025-06-03 15:55:47 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:55:47.104136 | orchestrator | 2025-06-03 15:55:47 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:50.145572 | orchestrator | 2025-06-03 15:55:50 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:55:50.145710 | orchestrator | 2025-06-03 15:55:50 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:53.190283 | orchestrator | 2025-06-03 15:55:53 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:55:53.190389 | orchestrator | 2025-06-03 15:55:53 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:56.231053 | orchestrator | 2025-06-03 15:55:56 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 
15:55:56.233245 | orchestrator | 2025-06-03 15:55:56 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:55:59.274450 | orchestrator | 2025-06-03 15:55:59 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:55:59.274549 | orchestrator | 2025-06-03 15:55:59 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:02.316431 | orchestrator | 2025-06-03 15:56:02 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:56:02.316546 | orchestrator | 2025-06-03 15:56:02 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:05.360331 | orchestrator | 2025-06-03 15:56:05 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:56:05.360432 | orchestrator | 2025-06-03 15:56:05 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:08.400175 | orchestrator | 2025-06-03 15:56:08 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:56:08.400270 | orchestrator | 2025-06-03 15:56:08 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:11.444541 | orchestrator | 2025-06-03 15:56:11 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:56:11.444686 | orchestrator | 2025-06-03 15:56:11 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:14.481970 | orchestrator | 2025-06-03 15:56:14 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:56:14.482108 | orchestrator | 2025-06-03 15:56:14 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:17.528187 | orchestrator | 2025-06-03 15:56:17 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:56:17.528319 | orchestrator | 2025-06-03 15:56:17 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:20.561071 | orchestrator | 2025-06-03 15:56:20 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:56:20.561174 
| orchestrator | 2025-06-03 15:56:20 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:23.601870 | orchestrator | 2025-06-03 15:56:23 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:56:23.601940 | orchestrator | 2025-06-03 15:56:23 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:26.641512 | orchestrator | 2025-06-03 15:56:26 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:56:26.641632 | orchestrator | 2025-06-03 15:56:26 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:29.686414 | orchestrator | 2025-06-03 15:56:29 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:56:29.686499 | orchestrator | 2025-06-03 15:56:29 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:32.730430 | orchestrator | 2025-06-03 15:56:32 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:56:32.730525 | orchestrator | 2025-06-03 15:56:32 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:35.778694 | orchestrator | 2025-06-03 15:56:35 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:56:35.778802 | orchestrator | 2025-06-03 15:56:35 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:38.824395 | orchestrator | 2025-06-03 15:56:38 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:56:38.824685 | orchestrator | 2025-06-03 15:56:38 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:41.877392 | orchestrator | 2025-06-03 15:56:41 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED 2025-06-03 15:56:41.878724 | orchestrator | 2025-06-03 15:56:41 | INFO  | Task 1667b837-88b3-4a66-8c9f-c398781332b0 is in state STARTED 2025-06-03 15:56:41.878905 | orchestrator | 2025-06-03 15:56:41 | INFO  | Wait 1 second(s) until the next check 2025-06-03 15:56:44.926708 | orchestrator 
| 2025-06-03 15:56:44 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED
2025-06-03 15:56:44.928927 | orchestrator | 2025-06-03 15:56:44 | INFO  | Task 1667b837-88b3-4a66-8c9f-c398781332b0 is in state STARTED
2025-06-03 15:56:44.929727 | orchestrator | 2025-06-03 15:56:44 | INFO  | Wait 1 second(s) until the next check
[... identical poll records every ~3 s elided; both tasks remained in state STARTED ...]
2025-06-03 15:56:57.121351 | orchestrator | 2025-06-03 15:56:57 | INFO  | Task 1667b837-88b3-4a66-8c9f-c398781332b0 is in state SUCCESS
[... poll records every ~3 s elided; task 9a14bec6-12b7-4721-b68b-121f9f3acba2 remained in state STARTED ...]
2025-06-03 15:57:24.558110 | orchestrator | 2025-06-03 15:57:24 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED
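The records above come from a fixed-interval status poll: each round checks every outstanding task, drops the ones that reached a terminal state, and sleeps before the next round. A minimal sketch of such a loop, assuming a hypothetical `get_state` callable standing in for the real OSISM/Celery task-state lookup (this is not the actual client code):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll each pending task until all reach a terminal state.

    `get_state` is a stand-in (assumption) for the real task-state lookup;
    it takes a task id and returns a state string such as "STARTED".
    """
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            states[task_id] = state
            log(f"Task {task_id} is in state {state}")
        # Keep only tasks that have not yet finished.
        pending = {t for t in pending if states[t] not in ("SUCCESS", "FAILURE")}
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states
```

Note the observed gap between rounds (~3 s) is larger than the advertised "Wait 1 second(s)", since each round also spends time querying the task states themselves.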
2025-06-03 15:57:24.558193 | orchestrator | 2025-06-03 15:57:24 | INFO  | Wait 1 second(s) until the next check
[... poll records every ~3 s elided; task 9a14bec6-12b7-4721-b68b-121f9f3acba2 remained in state STARTED from 15:57:27 to 15:59:26 ...]
2025-06-03 15:59:26.276955 | orchestrator | 2025-06-03 15:59:26 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state STARTED
2025-06-03 15:59:26.277062 | orchestrator | 2025-06-03 15:59:26 | INFO
 | Wait 1 second(s) until the next check
2025-06-03 15:59:29.314700 | orchestrator | 2025-06-03 15:59:29 | INFO  | Task 9a14bec6-12b7-4721-b68b-121f9f3acba2 is in state SUCCESS
2025-06-03 15:59:29.315664 | orchestrator | None

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on OpenStack release] **********************************
Tuesday 03 June 2025 15:50:54 +0000 (0:00:00.493) 0:00:00.493 **********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [Group hosts based on Kolla action] ***************************************
Tuesday 03 June 2025 15:50:55 +0000 (0:00:00.936) 0:00:01.430 **********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [Group hosts based on enabled services] ***********************************
Tuesday 03 June 2025 15:50:56 +0000 (0:00:01.094) 0:00:02.525 **********
changed: [testbed-manager] => (item=enable_nova_True)
changed: [testbed-node-0] => (item=enable_nova_True)
changed: [testbed-node-1] => (item=enable_nova_True)
changed: [testbed-node-2] => (item=enable_nova_True)
changed: [testbed-node-3] => (item=enable_nova_True)
changed: [testbed-node-4] => (item=enable_nova_True)
changed: [testbed-node-5] => (item=enable_nova_True)

PLAY [Bootstrap nova API databases] ********************************************

TASK [Bootstrap deploy] ********************************************************
Tuesday 03 June 2025 15:50:57 +0000 (0:00:01.812) 0:00:04.337 **********
included: nova for testbed-node-0, testbed-node-1, testbed-node-2

TASK [nova : Creating Nova databases] ******************************************
Tuesday 03 June 2025 15:50:58 +0000 (0:00:00.814) 0:00:05.152 **********
changed: [testbed-node-0] => (item=nova_cell0)
changed: [testbed-node-0] => (item=nova_api)

TASK [nova : Creating Nova databases user and setting permissions] *************
Tuesday 03 June 2025 15:51:02 +0000 (0:00:03.533) 0:00:08.685 **********
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0]

TASK [nova : Ensuring config directories exist] ********************************
Tuesday 03 June 2025 15:51:06 +0000 (0:00:03.799) 0:00:12.484 **********
changed: [testbed-node-0]

TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
Tuesday 03 June 2025 15:51:07 +0000 (0:00:00.977) 0:00:13.462 **********
changed: [testbed-node-0]

TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
Tuesday 03 June 2025 15:51:08 +0000 (0:00:01.520) 0:00:14.983 **********
changed: [testbed-node-0]

TASK [nova : include_tasks] ****************************************************
Tuesday 03 June 2025 15:51:12 +0000 (0:00:04.317) 0:00:19.300 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [nova : Running Nova API bootstrap container] *****************************
Tuesday 03 June 2025 15:51:13 +0000 (0:00:00.427) 0:00:19.727 **********
ok: [testbed-node-0]

TASK [nova : Create cell0 mappings] ********************************************
Tuesday 03 June 2025 15:51:52 +0000 (0:00:38.834) 0:00:58.562 **********
changed: [testbed-node-0]

TASK [nova-cell : Get a list of existing cells] ********************************
Tuesday 03 June 2025 15:52:07 +0000 (0:00:14.822) 0:01:13.385 **********
ok: [testbed-node-0]

TASK [nova-cell : Extract current cell settings from list] *********************
Tuesday 03 June 2025 15:52:18 +0000 (0:00:11.832) 0:01:25.218 **********
ok: [testbed-node-0]

TASK [nova : Update cell0 mappings] ********************************************
Tuesday 03 June 2025 15:52:19 +0000 (0:00:00.969) 0:01:26.187 **********
skipping: [testbed-node-0]

TASK [nova : include_tasks] ****************************************************
Tuesday 03 June 2025 15:52:20 +0000 (0:00:00.420) 0:01:26.608 **********
included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [nova : Running Nova API bootstrap container] *****************************
Tuesday 03 June 2025 15:52:20 +0000 (0:00:00.494) 0:01:27.102 **********
ok: [testbed-node-0]

TASK [Bootstrap upgrade] *******************************************************
Tuesday 03 June 2025 15:52:39 +0000 (0:00:18.611) 0:01:45.714 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY [Bootstrap nova cell databases] *******************************************

TASK [Bootstrap deploy] ********************************************************
Tuesday 03 June 2025 15:52:39 +0000 (0:00:00.276) 0:01:45.990 **********
included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2

TASK [nova-cell : Creating Nova cell database] *********************************
Tuesday 03 June 2025 15:52:40 +0000 (0:00:00.542) 0:01:46.533 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
Tuesday 03 June 2025 15:52:42 +0000 (0:00:02.068) 0:01:48.601 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
Tuesday 03 June 2025 15:52:44 +0000 (0:00:01.807) 0:01:50.409 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
Tuesday 03 June 2025 15:52:44 +0000 (0:00:00.337) 0:01:50.746 **********
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=None)
skipping: [testbed-node-2]
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]

TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
Tuesday 03 June 2025 15:52:52 +0000 (0:00:08.552) 0:01:59.298 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
Tuesday 03 June 2025 15:52:53 +0000 (0:00:00.299) 0:01:59.598 **********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=None)
skipping: [testbed-node-2]

TASK [nova-cell : Ensuring config directories exist] ***************************
Tuesday 03 June 2025 15:52:53 +0000 (0:00:00.577) 0:02:00.176 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
Tuesday 03 June 2025 15:52:54 +0000 (0:00:00.442) 0:02:00.618 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
Tuesday 03 June 2025 15:52:55 +0000 (0:00:01.021) 0:02:01.639 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [nova-cell : Running Nova cell bootstrap container] ***********************
Tuesday 03 June 2025 15:52:57 +0000 (0:00:01.911) 0:02:03.551 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-0]

TASK [nova-cell : Get a list of existing cells] ********************************
Tuesday 03 June 2025 15:53:18 +0000 (0:00:21.345) 0:02:24.896 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-0]

TASK [nova-cell : Extract current cell settings from list] *********************
Tuesday 03 June 2025 15:53:30 +0000 (0:00:11.599) 0:02:36.495 **********
ok: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [nova-cell : Create cell] *************************************************
Tuesday 03 June 2025 15:53:31 +0000 (0:00:01.015) 0:02:37.511 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [nova-cell : Update cell] *************************************************
Tuesday 03 June 2025 15:53:42 +0000 (0:00:11.338) 0:02:48.849 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Bootstrap upgrade] *******************************************************
Tuesday 03 June 2025 15:53:43 +0000 (0:00:01.226) 0:02:50.075 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY [Apply role nova] *********************************************************

TASK [nova : include_tasks] ****************************************************
Tuesday 03 June 2025 15:53:44 +0000 (0:00:00.303) 0:02:50.379 **********
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-ks-register : nova | Creating services] **************************
Tuesday 03 June 2025 15:53:44 +0000 (0:00:00.525) 0:02:50.904 **********
skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
changed: [testbed-node-0] => (item=nova (compute))

TASK [service-ks-register : nova | Creating endpoints] *************************
Tuesday 03 June 2025 15:53:47 +0000 (0:00:03.463) 0:02:54.367 **********
skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)

TASK [service-ks-register : nova | Creating projects] **************************
Tuesday 03 June 2025 15:53:54 +0000 (0:00:06.747) 0:03:01.114 **********
ok: [testbed-node-0] => (item=service)

TASK [service-ks-register : nova | Creating users] *****************************
Tuesday 03 June 2025 15:53:58 +0000 (0:00:03.612) 0:03:04.727 **********
[WARNING]: Module did not set no_log for update_password
changed: [testbed-node-0] => (item=nova -> service)

TASK [service-ks-register : nova | Creating roles] *****************************
Tuesday 03 June 2025 15:54:02 +0000 (0:00:04.019) 0:03:08.747 **********
ok: [testbed-node-0] => (item=admin)

TASK [service-ks-register : nova | Granting user roles] ************************
Tuesday 03 June 2025 15:54:05 +0000 (0:00:03.301) 0:03:12.049 **********
changed: [testbed-node-0] => (item=nova -> service -> admin)
changed: [testbed-node-0] => (item=nova -> service -> service)

TASK [nova : Ensuring config directories exist] ********************************
Tuesday 03 June 2025 15:54:13 +0000 (0:00:07.764) 0:03:19.814 **********
changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.320097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.320109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.320121 | orchestrator | 2025-06-03 15:59:29.320132 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-06-03 15:59:29.320143 | orchestrator | Tuesday 03 June 2025 15:54:14 +0000 (0:00:01.184) 0:03:20.998 ********** 2025-06-03 15:59:29.320154 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.320165 | orchestrator | 2025-06-03 15:59:29.320175 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-06-03 15:59:29.320186 | orchestrator 
| Tuesday 03 June 2025 15:54:14 +0000 (0:00:00.106) 0:03:21.105 ********** 2025-06-03 15:59:29.320197 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.320208 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.320219 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.320230 | orchestrator | 2025-06-03 15:59:29.320241 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-06-03 15:59:29.320374 | orchestrator | Tuesday 03 June 2025 15:54:15 +0000 (0:00:00.438) 0:03:21.544 ********** 2025-06-03 15:59:29.320390 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-03 15:59:29.320401 | orchestrator | 2025-06-03 15:59:29.320412 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-06-03 15:59:29.320432 | orchestrator | Tuesday 03 June 2025 15:54:15 +0000 (0:00:00.623) 0:03:22.167 ********** 2025-06-03 15:59:29.320443 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.320454 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.320465 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.320475 | orchestrator | 2025-06-03 15:59:29.320486 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-03 15:59:29.320497 | orchestrator | Tuesday 03 June 2025 15:54:16 +0000 (0:00:00.265) 0:03:22.432 ********** 2025-06-03 15:59:29.320508 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:59:29.320518 | orchestrator | 2025-06-03 15:59:29.320529 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-03 15:59:29.320569 | orchestrator | Tuesday 03 June 2025 15:54:16 +0000 (0:00:00.633) 0:03:23.066 ********** 2025-06-03 15:59:29.320583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:59:29.320613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:59:29.320627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:59:29.320648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.320660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.320690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.320702 | orchestrator | 2025-06-03 15:59:29.320713 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-03 15:59:29.320724 | orchestrator | Tuesday 03 June 2025 15:54:19 +0000 (0:00:02.357) 0:03:25.424 ********** 2025-06-03 15:59:29.320736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 
'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:59:29.320748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.320768 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.320781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:59:29.320793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.320804 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.320830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:59:29.320844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.320863 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.320875 | orchestrator | 2025-06-03 15:59:29.320886 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-03 15:59:29.320897 | orchestrator | Tuesday 03 June 2025 15:54:19 +0000 (0:00:00.515) 
0:03:25.939 ********** 2025-06-03 15:59:29.320911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:59:29.320931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.320948 | orchestrator | skipping: [testbed-node-0] 2025-06-03 
15:59:29.320982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:59:29.321001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.321028 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.321046 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:59:29.321065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.321083 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.321101 | orchestrator | 2025-06-03 15:59:29.321119 | orchestrator | TASK [nova : 
Copying over config.json files for services] ********************** 2025-06-03 15:59:29.321136 | orchestrator | Tuesday 03 June 2025 15:54:20 +0000 (0:00:00.847) 0:03:26.786 ********** 2025-06-03 15:59:29.321177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:59:29.321198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:59:29.321233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 
15:59:29.321255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.321295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.321308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.321326 | orchestrator | 2025-06-03 15:59:29.321536 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-06-03 
15:59:29.321583 | orchestrator | Tuesday 03 June 2025 15:54:22 +0000 (0:00:02.421) 0:03:29.208 ********** 2025-06-03 15:59:29.321602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:59:29.321622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:59:29.321672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:59:29.321692 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.321718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.321735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.321752 | orchestrator | 2025-06-03 15:59:29.321770 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-06-03 15:59:29.321786 | orchestrator | Tuesday 03 June 2025 15:54:28 +0000 
(0:00:05.603) 0:03:34.811 ********** 2025-06-03 15:59:29.321803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:59:29.321840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.321859 | orchestrator | skipping: [testbed-node-0] 2025-06-03 
15:59:29.321877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:59:29.321906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.321921 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.321933 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-03 15:59:29.321948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.321961 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.321975 | orchestrator | 2025-06-03 15:59:29.321990 | orchestrator | TASK [nova : 
Copying over nova-api-wsgi.conf] ********************************** 2025-06-03 15:59:29.322004 | orchestrator | Tuesday 03 June 2025 15:54:29 +0000 (0:00:00.602) 0:03:35.413 ********** 2025-06-03 15:59:29.322053 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:59:29.322072 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:59:29.322080 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:59:29.322088 | orchestrator | 2025-06-03 15:59:29.322113 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-06-03 15:59:29.322122 | orchestrator | Tuesday 03 June 2025 15:54:31 +0000 (0:00:02.067) 0:03:37.481 ********** 2025-06-03 15:59:29.322130 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.322138 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.322146 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.322154 | orchestrator | 2025-06-03 15:59:29.322162 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-06-03 15:59:29.322170 | orchestrator | Tuesday 03 June 2025 15:54:31 +0000 (0:00:00.342) 0:03:37.823 ********** 2025-06-03 15:59:29.322178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:59:29.322187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:59:29.322206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-03 15:59:29.322222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.322230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.322239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.322247 | orchestrator | 2025-06-03 15:59:29.322255 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-03 15:59:29.322263 | orchestrator | Tuesday 03 June 2025 15:54:33 +0000 (0:00:01.823) 0:03:39.647 ********** 2025-06-03 15:59:29.322271 | orchestrator | 2025-06-03 15:59:29.322279 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-03 15:59:29.322287 | orchestrator | Tuesday 03 June 2025 15:54:33 +0000 (0:00:00.130) 0:03:39.777 ********** 2025-06-03 15:59:29.322295 | orchestrator | 2025-06-03 15:59:29.322302 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-03 15:59:29.322310 | orchestrator | Tuesday 03 June 2025 15:54:33 +0000 (0:00:00.128) 0:03:39.905 ********** 2025-06-03 15:59:29.322318 | orchestrator | 2025-06-03 15:59:29.322326 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-06-03 15:59:29.322354 | orchestrator | Tuesday 03 June 2025 15:54:33 +0000 (0:00:00.305) 0:03:40.210 ********** 
2025-06-03 15:59:29.322362 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:59:29.322370 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:59:29.322378 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:59:29.322386 | orchestrator | 2025-06-03 15:59:29.322393 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-06-03 15:59:29.322401 | orchestrator | Tuesday 03 June 2025 15:55:01 +0000 (0:00:27.506) 0:04:07.717 ********** 2025-06-03 15:59:29.322409 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:59:29.322417 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:59:29.322425 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:59:29.322433 | orchestrator | 2025-06-03 15:59:29.322440 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-06-03 15:59:29.322453 | orchestrator | 2025-06-03 15:59:29.322461 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-03 15:59:29.322469 | orchestrator | Tuesday 03 June 2025 15:55:11 +0000 (0:00:10.638) 0:04:18.356 ********** 2025-06-03 15:59:29.322478 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:59:29.322488 | orchestrator | 2025-06-03 15:59:29.322496 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-03 15:59:29.322504 | orchestrator | Tuesday 03 June 2025 15:55:13 +0000 (0:00:01.243) 0:04:19.600 ********** 2025-06-03 15:59:29.322512 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:59:29.322520 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:59:29.322527 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:59:29.322535 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.322543 | orchestrator | skipping: [testbed-node-1] 
2025-06-03 15:59:29.322551 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.322559 | orchestrator | 2025-06-03 15:59:29.322567 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-06-03 15:59:29.322575 | orchestrator | Tuesday 03 June 2025 15:55:13 +0000 (0:00:00.656) 0:04:20.256 ********** 2025-06-03 15:59:29.322583 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.322591 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.322598 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.322606 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-03 15:59:29.322614 | orchestrator | 2025-06-03 15:59:29.322622 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-03 15:59:29.322640 | orchestrator | Tuesday 03 June 2025 15:55:14 +0000 (0:00:00.812) 0:04:21.068 ********** 2025-06-03 15:59:29.322649 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-06-03 15:59:29.322657 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-06-03 15:59:29.322665 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-06-03 15:59:29.322673 | orchestrator | 2025-06-03 15:59:29.322681 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-03 15:59:29.322689 | orchestrator | Tuesday 03 June 2025 15:55:15 +0000 (0:00:00.728) 0:04:21.796 ********** 2025-06-03 15:59:29.322697 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-06-03 15:59:29.322707 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-06-03 15:59:29.322721 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-06-03 15:59:29.322735 | orchestrator | 2025-06-03 15:59:29.322748 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-03 15:59:29.322763 | 
orchestrator | Tuesday 03 June 2025 15:55:16 +0000 (0:00:01.238) 0:04:23.035 ********** 2025-06-03 15:59:29.322775 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-06-03 15:59:29.322788 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:59:29.322800 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-06-03 15:59:29.322813 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:59:29.322827 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-06-03 15:59:29.322841 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:59:29.322855 | orchestrator | 2025-06-03 15:59:29.322870 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-06-03 15:59:29.322884 | orchestrator | Tuesday 03 June 2025 15:55:17 +0000 (0:00:00.577) 0:04:23.613 ********** 2025-06-03 15:59:29.322899 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-03 15:59:29.322913 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-03 15:59:29.322927 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.322940 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-03 15:59:29.322962 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-03 15:59:29.322973 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.322985 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-03 15:59:29.322998 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-03 15:59:29.323010 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.323022 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-03 15:59:29.323034 | orchestrator | changed: [testbed-node-3] => 
(item=net.bridge.bridge-nf-call-iptables) 2025-06-03 15:59:29.323047 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-03 15:59:29.323059 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-03 15:59:29.323071 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-03 15:59:29.323083 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-03 15:59:29.323096 | orchestrator | 2025-06-03 15:59:29.323108 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-06-03 15:59:29.323120 | orchestrator | Tuesday 03 June 2025 15:55:19 +0000 (0:00:02.192) 0:04:25.805 ********** 2025-06-03 15:59:29.323132 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.323146 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.323159 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.323172 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:59:29.323186 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:59:29.323199 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:59:29.323212 | orchestrator | 2025-06-03 15:59:29.323225 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-06-03 15:59:29.323238 | orchestrator | Tuesday 03 June 2025 15:55:20 +0000 (0:00:01.514) 0:04:27.320 ********** 2025-06-03 15:59:29.323252 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.323265 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.323279 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.323292 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:59:29.323307 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:59:29.323321 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:59:29.323399 | orchestrator | 2025-06-03 15:59:29.323414 | 
orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-03 15:59:29.323422 | orchestrator | Tuesday 03 June 2025 15:55:22 +0000 (0:00:01.660) 0:04:28.981 ********** 2025-06-03 15:59:29.323433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:59:29.323461 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}}) 2025-06-03 15:59:29.323479 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:59:29.323488 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:59:29.323497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:59:29.323506 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:59:29.323514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:59:29.323532 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.323547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:59:29.323555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.323564 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.323572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:59:29.323585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.323594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.323608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.323617 | orchestrator | 2025-06-03 15:59:29.323625 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-03 15:59:29.323633 | orchestrator | Tuesday 03 June 2025 15:55:25 +0000 (0:00:02.749) 0:04:31.730 ********** 2025-06-03 15:59:29.323673 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 15:59:29.323683 | orchestrator | 2025-06-03 15:59:29.323691 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-03 15:59:29.323699 | orchestrator | Tuesday 03 June 2025 15:55:26 +0000 (0:00:01.244) 0:04:32.974 ********** 2025-06-03 15:59:29.323708 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:59:29.323716 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:59:29.324245 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:59:29.324275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:59:29.324285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:59:29.324293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:59:29.324302 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:59:29.324310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:59:29.324318 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:59:29.324356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.324375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.324383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.324392 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.324401 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.324409 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.324423 | orchestrator | 2025-06-03 15:59:29.324431 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-03 15:59:29.324439 | orchestrator | Tuesday 03 June 2025 15:55:30 +0000 (0:00:03.806) 0:04:36.781 ********** 2025-06-03 15:59:29.324458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:59:29.324467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:59:29.324475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.324483 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:59:29.324492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:59:29.324500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:59:29.324526 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.324535 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:59:29.324543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:59:29.324551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:59:29.324559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.324568 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:59:29.324576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-03 15:59:29.324590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.324599 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.324616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-03 15:59:29.324624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.324633 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.324641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-03 15:59:29.324649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.324657 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.324665 | orchestrator | 2025-06-03 15:59:29.324673 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-03 15:59:29.324681 | orchestrator | Tuesday 03 June 2025 15:55:32 +0000 (0:00:01.837) 0:04:38.618 ********** 2025-06-03 15:59:29.324689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:59:29.324703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  
2025-06-03 15:59:29.324722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.324731 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:59:29.324739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:59:29.324747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:59:29.324756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.324770 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:59:29.324778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:59:29.324795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:59:29.324804 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.324812 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:59:29.324822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-03 15:59:29.324832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.324841 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.324850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-03 15:59:29.324864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.324873 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.324883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-03 15:59:29.324900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-03 15:59:29.324910 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.324919 | orchestrator | 2025-06-03 15:59:29.324929 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-03 15:59:29.324938 | 
orchestrator | Tuesday 03 June 2025 15:55:34 +0000 (0:00:02.056) 0:04:40.674 **********
2025-06-03 15:59:29.324947 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:59:29.324956 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:59:29.324964 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:59:29.324973 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-03 15:59:29.324983 | orchestrator |
2025-06-03 15:59:29.324992 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-06-03 15:59:29.325001 | orchestrator | Tuesday 03 June 2025 15:55:35 +0000 (0:00:00.922) 0:04:41.597 **********
2025-06-03 15:59:29.325010 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-03 15:59:29.325019 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-03 15:59:29.325028 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-03 15:59:29.325037 | orchestrator |
2025-06-03 15:59:29.325046 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-06-03 15:59:29.325055 | orchestrator | Tuesday 03 June 2025 15:55:36 +0000 (0:00:01.139) 0:04:42.737 **********
2025-06-03 15:59:29.325064 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-03 15:59:29.325073 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-03 15:59:29.325082 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-03 15:59:29.325091 | orchestrator |
2025-06-03 15:59:29.325100 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-06-03 15:59:29.325115 | orchestrator | Tuesday 03 June 2025 15:55:37 +0000 (0:00:00.971) 0:04:43.708 **********
2025-06-03 15:59:29.325124 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:59:29.325133 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:59:29.325142 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:59:29.325151 | orchestrator |
2025-06-03 15:59:29.325161 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-06-03 15:59:29.325169 | orchestrator | Tuesday 03 June 2025 15:55:37 +0000 (0:00:00.531) 0:04:44.240 **********
2025-06-03 15:59:29.325179 | orchestrator | ok: [testbed-node-3]
2025-06-03 15:59:29.325187 | orchestrator | ok: [testbed-node-4]
2025-06-03 15:59:29.325195 | orchestrator | ok: [testbed-node-5]
2025-06-03 15:59:29.325203 | orchestrator |
2025-06-03 15:59:29.325211 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-06-03 15:59:29.325219 | orchestrator | Tuesday 03 June 2025 15:55:38 +0000 (0:00:00.595) 0:04:44.835 **********
2025-06-03 15:59:29.325227 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-03 15:59:29.325235 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-03 15:59:29.325243 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-03 15:59:29.325251 | orchestrator |
2025-06-03 15:59:29.325259 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-06-03 15:59:29.325267 | orchestrator | Tuesday 03 June 2025 15:55:39 +0000 (0:00:01.441) 0:04:46.276 **********
2025-06-03 15:59:29.325275 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-03 15:59:29.325283 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-03 15:59:29.325290 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-03 15:59:29.325298 | orchestrator |
2025-06-03 15:59:29.325306 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-06-03 15:59:29.325314 | orchestrator | Tuesday 03 June 2025 15:55:41 +0000 (0:00:01.273) 0:04:47.550 **********
2025-06-03 15:59:29.325322 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-03 15:59:29.325330 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-03 15:59:29.325353 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-03 15:59:29.325361 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-06-03 15:59:29.325369 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-06-03 15:59:29.325376 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-06-03 15:59:29.325384 | orchestrator |
2025-06-03 15:59:29.325392 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-06-03 15:59:29.325400 | orchestrator | Tuesday 03 June 2025 15:55:44 +0000 (0:00:03.769) 0:04:51.319 **********
2025-06-03 15:59:29.325408 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:59:29.325415 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:59:29.325423 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:59:29.325431 | orchestrator |
2025-06-03 15:59:29.325439 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-06-03 15:59:29.325447 | orchestrator | Tuesday 03 June 2025 15:55:45 +0000 (0:00:00.326) 0:04:51.645 **********
2025-06-03 15:59:29.325455 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:59:29.325463 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:59:29.325471 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:59:29.325478 | orchestrator |
2025-06-03 15:59:29.325486 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-06-03 15:59:29.325494 | orchestrator | Tuesday 03 June 2025 15:55:45 +0000 (0:00:00.280) 0:04:51.926 **********
2025-06-03 15:59:29.325503 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:59:29.325511 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:59:29.325518 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:59:29.325526 | orchestrator |
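The "Extract nova key from file" and "Extract cinder key from file" tasks above pull the cephx key out of the INI-style Ceph keyrings staged on the deploy host. A minimal sketch of that extraction step (the `extract_key` helper and the sample key are hypothetical; the nova-cell role does this with Ansible lookups and filters, not Python):

```python
import re

def extract_key(keyring_text: str, client: str) -> str:
    """Pull the base64 'key = ...' value for a [client.<name>] section
    of an INI-style Ceph keyring. Hypothetical helper for illustration."""
    # Grab the body of the requested section, up to the next section header.
    section = re.search(
        r"\[" + re.escape(client) + r"\](.*?)(?=\n\[|\Z)",
        keyring_text, re.S)
    if not section:
        raise KeyError(f"no section [{client}] in keyring")
    # The key line looks like:  key = AQ...==
    match = re.search(r"^\s*key\s*=\s*(\S+)", section.group(1), re.M)
    if not match:
        raise KeyError(f"no key entry under [{client}]")
    return match.group(1)

# Sample keyring (made-up key) in the usual ceph-authtool layout.
keyring = """\
[client.nova]
\tkey = AQBSampleBase64Key==
\tcaps mon = "profile rbd"
"""
print(extract_key(keyring, "client.nova"))  # AQBSampleBase64Key==
```

The extracted value is what later gets pushed into libvirt as the secret payload.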
2025-06-03 15:59:29.325542 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-06-03 15:59:29.325551 | orchestrator | Tuesday 03 June 2025 15:55:47 +0000 (0:00:01.517) 0:04:53.443 **********
2025-06-03 15:59:29.325567 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-03 15:59:29.325576 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-03 15:59:29.325584 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-03 15:59:29.325592 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-03 15:59:29.325600 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-03 15:59:29.325608 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-03 15:59:29.325616 | orchestrator |
2025-06-03 15:59:29.325624 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-06-03 15:59:29.325632 | orchestrator | Tuesday 03 June 2025 15:55:50 +0000 (0:00:03.254) 0:04:56.698 **********
2025-06-03 15:59:29.325640 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-03 15:59:29.325648 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-03 15:59:29.325656 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-03 15:59:29.325663 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-03 15:59:29.325671 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:59:29.325679 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-03 15:59:29.325687 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:59:29.325695 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-03 15:59:29.325703 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:59:29.325711 | orchestrator |
2025-06-03 15:59:29.325718 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-06-03 15:59:29.325726 | orchestrator | Tuesday 03 June 2025 15:55:53 +0000 (0:00:03.276) 0:04:59.975 **********
2025-06-03 15:59:29.325734 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:59:29.325742 | orchestrator |
2025-06-03 15:59:29.325750 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-06-03 15:59:29.325758 | orchestrator | Tuesday 03 June 2025 15:55:53 +0000 (0:00:00.124) 0:05:00.100 **********
2025-06-03 15:59:29.325766 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:59:29.325774 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:59:29.325782 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:59:29.325789 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:59:29.325797 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:59:29.325805 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:59:29.325813 | orchestrator |
2025-06-03 15:59:29.325821 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-06-03 15:59:29.325829 | orchestrator | Tuesday 03 June 2025 15:55:54 +0000 (0:00:00.760) 0:05:00.860 **********
2025-06-03 15:59:29.325836 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-03 15:59:29.325844 | orchestrator |
2025-06-03 15:59:29.325852 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-06-03 15:59:29.325860 | orchestrator | Tuesday 03 June 2025
15:55:55 +0000 (0:00:00.689) 0:05:01.550 ********** 2025-06-03 15:59:29.325868 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:59:29.325876 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:59:29.325884 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:59:29.325891 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.325899 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.325907 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.325915 | orchestrator | 2025-06-03 15:59:29.325923 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-06-03 15:59:29.325936 | orchestrator | Tuesday 03 June 2025 15:55:55 +0000 (0:00:00.610) 0:05:02.161 ********** 2025-06-03 15:59:29.325944 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:59:29.325972 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:59:29.325982 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-03 15:59:29.325990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:59:29.325999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:59:29.326012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-03 15:59:29.326119 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}}) 2025-06-03 15:59:29.326140 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:59:29.326149 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-03 15:59:29.326157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.326166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.326174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.326189 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.326206 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 
'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.326215 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.326223 | orchestrator | 2025-06-03 15:59:29.326231 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-06-03 15:59:29.326239 | orchestrator | Tuesday 03 June 2025 15:55:59 +0000 (0:00:03.710) 0:05:05.871 ********** 2025-06-03 15:59:29.326248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 
'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:59:29.326256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:59:29.326270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:59:29.326278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-03 15:59:29.326296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-03 15:59:29.326305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-03 15:59:29.326313 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-03 15:59:29.326327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-03 15:59:29.326361 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-03 15:59:29.326378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-03 15:59:29.326387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-03 15:59:29.326396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-03 15:59:29.326404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-03 15:59:29.326419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-03 15:59:29.326427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-03 15:59:29.326435 | orchestrator |
2025-06-03 15:59:29.326443 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-06-03 15:59:29.326451 | orchestrator | Tuesday 03 June 2025 15:56:05 +0000 (0:00:06.137) 0:05:12.009 **********
2025-06-03 15:59:29.326459 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:59:29.326467 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:59:29.326475 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:59:29.326483 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:59:29.326491 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:59:29.326498 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:59:29.326506 | orchestrator |
2025-06-03 15:59:29.326514 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-06-03 15:59:29.326522 | orchestrator | Tuesday 03 June 2025 15:56:07 +0000 (0:00:01.746) 0:05:13.755 **********
2025-06-03 15:59:29.326530 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-06-03 15:59:29.326537 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-06-03 15:59:29.326545 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-06-03 15:59:29.326553 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-06-03 15:59:29.326571 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-06-03 15:59:29.326580 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-06-03 15:59:29.326588 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-06-03 15:59:29.326596 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:59:29.326604 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-06-03 15:59:29.326612 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:59:29.326620 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-06-03 15:59:29.326627 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:59:29.326635 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-06-03 15:59:29.326644 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-06-03 15:59:29.326652 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-06-03 15:59:29.326659 | orchestrator |
2025-06-03 15:59:29.326667 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-06-03 15:59:29.326681 | orchestrator | Tuesday 03 June 2025 15:56:10 +0000 (0:00:03.535) 0:05:17.291 **********
2025-06-03 15:59:29.326689 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:59:29.326697 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:59:29.326705 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:59:29.326713 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:59:29.326721 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:59:29.326728 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:59:29.326736 | orchestrator |
2025-06-03 15:59:29.326744 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-06-03 15:59:29.326752 | orchestrator | Tuesday 03 June 2025 15:56:11 +0000 (0:00:00.790) 0:05:18.081 **********
2025-06-03 15:59:29.326760 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-06-03 15:59:29.326768 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-06-03 15:59:29.326776 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-06-03 15:59:29.326784 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-06-03 15:59:29.326792 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-06-03 15:59:29.326799 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-06-03 15:59:29.326807 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-06-03 15:59:29.326815 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-06-03 15:59:29.326823 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-06-03 15:59:29.326831 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-06-03 15:59:29.326839 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:59:29.326846 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-06-03 15:59:29.326854 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:59:29.326862 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-06-03 15:59:29.326870 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-06-03 15:59:29.326878 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:59:29.326886 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-06-03 15:59:29.326894 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-06-03 15:59:29.326902 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-06-03 15:59:29.326910 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-06-03 15:59:29.326918 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-06-03 15:59:29.326925 | orchestrator |
2025-06-03 15:59:29.326934 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-06-03 15:59:29.326942 | orchestrator | Tuesday 03 June 2025 15:56:16 +0000 (0:00:05.069) 0:05:23.150 **********
2025-06-03 15:59:29.326949 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-03 15:59:29.326957 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-03 15:59:29.326981 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-03 15:59:29.326989 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-03 15:59:29.326997 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-03 15:59:29.327005 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-03 15:59:29.327013 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-03 15:59:29.327021 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-03 15:59:29.327029 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-03 15:59:29.327037 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-03 15:59:29.327044 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-03 15:59:29.327052 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-03 15:59:29.327060 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-03 15:59:29.327068 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:59:29.327076 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-03 15:59:29.327083 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:59:29.327091 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-03 15:59:29.327099 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:59:29.327106 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-03 15:59:29.327114 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-03 15:59:29.327122 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-03 15:59:29.327130 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-03 15:59:29.327138 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-03 15:59:29.327146 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-03 15:59:29.327153 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-03 15:59:29.327161 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-03 15:59:29.327169 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-03 15:59:29.327177 | orchestrator |
2025-06-03 15:59:29.327185 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-06-03 15:59:29.327193 | orchestrator | Tuesday 03 June 2025 15:56:23 +0000 (0:00:06.907) 0:05:30.057 **********
2025-06-03 15:59:29.327201 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:59:29.327209 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:59:29.327216 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:59:29.327224 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:59:29.327232 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:59:29.327240 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:59:29.327247 | orchestrator |
2025-06-03 15:59:29.327255 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-06-03 15:59:29.327263 | orchestrator | Tuesday 03 June 2025 15:56:24 +0000 (0:00:00.568) 0:05:30.626 **********
2025-06-03 15:59:29.327271 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:59:29.327279 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:59:29.327287 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:59:29.327294 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:59:29.327302 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:59:29.327316 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:59:29.327324 | orchestrator |
2025-06-03 15:59:29.327346 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2025-06-03 15:59:29.327355 | orchestrator | Tuesday 03 June 2025 15:56:25 +0000 (0:00:00.780) 0:05:31.406 **********
2025-06-03 15:59:29.327363 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:59:29.327370 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:59:29.327378 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:59:29.327386 | orchestrator | changed: [testbed-node-4]
2025-06-03 15:59:29.327394 | orchestrator | changed: [testbed-node-3]
2025-06-03 15:59:29.327402 | orchestrator | changed: [testbed-node-5]
2025-06-03 15:59:29.327410 | orchestrator |
2025-06-03 15:59:29.327418 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2025-06-03 15:59:29.327426 | orchestrator | Tuesday 03 June 2025 15:56:26 +0000 (0:00:01.789) 0:05:33.196 **********
2025-06-03 15:59:29.327443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-03 15:59:29.327452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-03 15:59:29.327460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-03 15:59:29.327469 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:59:29.327477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-03 15:59:29.327491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-03 15:59:29.327500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-03 15:59:29.327508 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:59:29.327537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-03 15:59:29.327555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-03 15:59:29.327563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-03 15:59:29.327572 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:59:29.327586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-03 15:59:29.327594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-03 15:59:29.327602 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:59:29.327611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-03 15:59:29.327628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-03 15:59:29.327637 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:59:29.327645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-03 15:59:29.327653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-03 15:59:29.327661 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:59:29.327670 | orchestrator |
2025-06-03 15:59:29.327678 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-06-03 15:59:29.327686 | orchestrator | Tuesday 03 June 2025 15:56:28 +0000 (0:00:01.541) 0:05:34.737 **********
2025-06-03 15:59:29.327699 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-06-03 15:59:29.327707 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-06-03 15:59:29.327716 | orchestrator | skipping: [testbed-node-3]
2025-06-03 15:59:29.327723 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-06-03 15:59:29.327731 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-06-03 15:59:29.327739 | orchestrator | skipping: [testbed-node-4]
2025-06-03 15:59:29.327747 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-06-03 15:59:29.327755 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-06-03 15:59:29.327763 | orchestrator | skipping: [testbed-node-5]
2025-06-03 15:59:29.327771 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-06-03 15:59:29.327779 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-06-03 15:59:29.327787 | orchestrator | skipping: [testbed-node-0]
2025-06-03 15:59:29.327795 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-06-03 15:59:29.327803 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-06-03 15:59:29.327810 | orchestrator | skipping: [testbed-node-1]
2025-06-03 15:59:29.327818 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-06-03 15:59:29.327826 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-06-03 15:59:29.327834 | orchestrator | skipping: [testbed-node-2]
2025-06-03 15:59:29.327842 | orchestrator |
2025-06-03 15:59:29.327850 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-06-03 15:59:29.327858 | orchestrator | Tuesday 03 June 2025 15:56:29 +0000 (0:00:00.633) 0:05:35.371 **********
2025-06-03 15:59:29.327866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-03 15:59:29.327883 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-03 15:59:29.327893 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-03 15:59:29.327906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-03 15:59:29.327915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-03 15:59:29.327923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-03 15:59:29.327931 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-03 15:59:29.327949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-03 15:59:29.327958 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-03 15:59:29.327974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-03 15:59:29.327982 | orchestrator |
changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.327990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.327998 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}}) 2025-06-03 15:59:29.328014 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.328023 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-03 15:59:29.328036 | orchestrator | 2025-06-03 15:59:29.328044 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-03 15:59:29.328052 | orchestrator | Tuesday 03 June 2025 15:56:31 +0000 (0:00:02.929) 0:05:38.300 ********** 2025-06-03 15:59:29.328060 | 
orchestrator | skipping: [testbed-node-3] 2025-06-03 15:59:29.328068 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:59:29.328076 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:59:29.328084 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.328092 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.328100 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.328107 | orchestrator | 2025-06-03 15:59:29.328115 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-03 15:59:29.328123 | orchestrator | Tuesday 03 June 2025 15:56:32 +0000 (0:00:00.574) 0:05:38.875 ********** 2025-06-03 15:59:29.328131 | orchestrator | 2025-06-03 15:59:29.328138 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-03 15:59:29.328146 | orchestrator | Tuesday 03 June 2025 15:56:32 +0000 (0:00:00.344) 0:05:39.220 ********** 2025-06-03 15:59:29.328154 | orchestrator | 2025-06-03 15:59:29.328162 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-03 15:59:29.328170 | orchestrator | Tuesday 03 June 2025 15:56:32 +0000 (0:00:00.136) 0:05:39.357 ********** 2025-06-03 15:59:29.328177 | orchestrator | 2025-06-03 15:59:29.328185 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-03 15:59:29.328193 | orchestrator | Tuesday 03 June 2025 15:56:33 +0000 (0:00:00.142) 0:05:39.499 ********** 2025-06-03 15:59:29.328201 | orchestrator | 2025-06-03 15:59:29.328209 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-03 15:59:29.328217 | orchestrator | Tuesday 03 June 2025 15:56:33 +0000 (0:00:00.132) 0:05:39.632 ********** 2025-06-03 15:59:29.328224 | orchestrator | 2025-06-03 15:59:29.328232 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 
2025-06-03 15:59:29.328240 | orchestrator | Tuesday 03 June 2025 15:56:33 +0000 (0:00:00.124) 0:05:39.756 ********** 2025-06-03 15:59:29.328248 | orchestrator | 2025-06-03 15:59:29.328255 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-06-03 15:59:29.328263 | orchestrator | Tuesday 03 June 2025 15:56:33 +0000 (0:00:00.129) 0:05:39.886 ********** 2025-06-03 15:59:29.328271 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:59:29.328279 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:59:29.328287 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:59:29.328295 | orchestrator | 2025-06-03 15:59:29.328302 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-06-03 15:59:29.328310 | orchestrator | Tuesday 03 June 2025 15:56:46 +0000 (0:00:12.588) 0:05:52.474 ********** 2025-06-03 15:59:29.328318 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:59:29.328326 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:59:29.328347 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:59:29.328355 | orchestrator | 2025-06-03 15:59:29.328363 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-06-03 15:59:29.328371 | orchestrator | Tuesday 03 June 2025 15:56:58 +0000 (0:00:12.824) 0:06:05.298 ********** 2025-06-03 15:59:29.328379 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:59:29.328387 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:59:29.328394 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:59:29.328402 | orchestrator | 2025-06-03 15:59:29.328410 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-06-03 15:59:29.328418 | orchestrator | Tuesday 03 June 2025 15:57:19 +0000 (0:00:20.136) 0:06:25.434 ********** 2025-06-03 15:59:29.328426 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:59:29.328439 | 
orchestrator | changed: [testbed-node-3] 2025-06-03 15:59:29.328447 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:59:29.328455 | orchestrator | 2025-06-03 15:59:29.328463 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-06-03 15:59:29.328471 | orchestrator | Tuesday 03 June 2025 15:57:56 +0000 (0:00:37.921) 0:07:03.356 ********** 2025-06-03 15:59:29.328479 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:59:29.328486 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:59:29.328494 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:59:29.328502 | orchestrator | 2025-06-03 15:59:29.328510 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-06-03 15:59:29.328518 | orchestrator | Tuesday 03 June 2025 15:57:57 +0000 (0:00:01.010) 0:07:04.366 ********** 2025-06-03 15:59:29.328525 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:59:29.328533 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:59:29.328541 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:59:29.328549 | orchestrator | 2025-06-03 15:59:29.328557 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-06-03 15:59:29.328575 | orchestrator | Tuesday 03 June 2025 15:57:58 +0000 (0:00:00.784) 0:07:05.151 ********** 2025-06-03 15:59:29.328583 | orchestrator | changed: [testbed-node-5] 2025-06-03 15:59:29.328591 | orchestrator | changed: [testbed-node-4] 2025-06-03 15:59:29.328599 | orchestrator | changed: [testbed-node-3] 2025-06-03 15:59:29.328607 | orchestrator | 2025-06-03 15:59:29.328615 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-06-03 15:59:29.328623 | orchestrator | Tuesday 03 June 2025 15:58:20 +0000 (0:00:21.856) 0:07:27.007 ********** 2025-06-03 15:59:29.328631 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:59:29.328639 | 
orchestrator | 2025-06-03 15:59:29.328647 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-06-03 15:59:29.328655 | orchestrator | Tuesday 03 June 2025 15:58:20 +0000 (0:00:00.130) 0:07:27.138 ********** 2025-06-03 15:59:29.328662 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.328670 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:59:29.328678 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.328686 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:59:29.328693 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.328701 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-06-03 15:59:29.328710 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-03 15:59:29.328718 | orchestrator | 2025-06-03 15:59:29.328725 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-06-03 15:59:29.328733 | orchestrator | Tuesday 03 June 2025 15:58:43 +0000 (0:00:22.644) 0:07:49.783 ********** 2025-06-03 15:59:29.328741 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:59:29.328749 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:59:29.328757 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.328765 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:59:29.328773 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.328780 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.328788 | orchestrator | 2025-06-03 15:59:29.328796 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-06-03 15:59:29.328804 | orchestrator | Tuesday 03 June 2025 15:58:51 +0000 (0:00:08.076) 0:07:57.859 ********** 2025-06-03 15:59:29.328812 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.328820 | orchestrator | 
skipping: [testbed-node-4] 2025-06-03 15:59:29.328828 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:59:29.328835 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.328843 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.328851 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-06-03 15:59:29.328859 | orchestrator | 2025-06-03 15:59:29.328867 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-03 15:59:29.328880 | orchestrator | Tuesday 03 June 2025 15:58:54 +0000 (0:00:03.368) 0:08:01.228 ********** 2025-06-03 15:59:29.328888 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-03 15:59:29.328896 | orchestrator | 2025-06-03 15:59:29.328904 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-03 15:59:29.328912 | orchestrator | Tuesday 03 June 2025 15:59:06 +0000 (0:00:11.983) 0:08:13.212 ********** 2025-06-03 15:59:29.328919 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-03 15:59:29.328927 | orchestrator | 2025-06-03 15:59:29.328935 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-06-03 15:59:29.328943 | orchestrator | Tuesday 03 June 2025 15:59:08 +0000 (0:00:01.345) 0:08:14.557 ********** 2025-06-03 15:59:29.328951 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:59:29.328959 | orchestrator | 2025-06-03 15:59:29.328967 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-06-03 15:59:29.328975 | orchestrator | Tuesday 03 June 2025 15:59:09 +0000 (0:00:01.278) 0:08:15.835 ********** 2025-06-03 15:59:29.328982 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-03 15:59:29.328990 | orchestrator | 2025-06-03 15:59:29.328998 | orchestrator | TASK [nova-cell : Remove old 
nova_libvirt_secrets container volume] ************ 2025-06-03 15:59:29.329006 | orchestrator | Tuesday 03 June 2025 15:59:20 +0000 (0:00:10.838) 0:08:26.674 ********** 2025-06-03 15:59:29.329014 | orchestrator | ok: [testbed-node-3] 2025-06-03 15:59:29.329022 | orchestrator | ok: [testbed-node-4] 2025-06-03 15:59:29.329030 | orchestrator | ok: [testbed-node-5] 2025-06-03 15:59:29.329037 | orchestrator | ok: [testbed-node-0] 2025-06-03 15:59:29.329045 | orchestrator | ok: [testbed-node-1] 2025-06-03 15:59:29.329053 | orchestrator | ok: [testbed-node-2] 2025-06-03 15:59:29.329061 | orchestrator | 2025-06-03 15:59:29.329069 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-06-03 15:59:29.329077 | orchestrator | 2025-06-03 15:59:29.329085 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-06-03 15:59:29.329092 | orchestrator | Tuesday 03 June 2025 15:59:22 +0000 (0:00:01.718) 0:08:28.393 ********** 2025-06-03 15:59:29.329100 | orchestrator | changed: [testbed-node-0] 2025-06-03 15:59:29.329108 | orchestrator | changed: [testbed-node-1] 2025-06-03 15:59:29.329116 | orchestrator | changed: [testbed-node-2] 2025-06-03 15:59:29.329124 | orchestrator | 2025-06-03 15:59:29.329132 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-06-03 15:59:29.329140 | orchestrator | 2025-06-03 15:59:29.329147 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-06-03 15:59:29.329155 | orchestrator | Tuesday 03 June 2025 15:59:23 +0000 (0:00:01.099) 0:08:29.492 ********** 2025-06-03 15:59:29.329163 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.329171 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.329179 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.329187 | orchestrator | 2025-06-03 15:59:29.329195 | orchestrator | PLAY [Reload Nova 
cell services] *********************************************** 2025-06-03 15:59:29.329202 | orchestrator | 2025-06-03 15:59:29.329210 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-06-03 15:59:29.329218 | orchestrator | Tuesday 03 June 2025 15:59:23 +0000 (0:00:00.523) 0:08:30.016 ********** 2025-06-03 15:59:29.329226 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-06-03 15:59:29.329238 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-03 15:59:29.329251 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-03 15:59:29.329259 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-06-03 15:59:29.329267 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-06-03 15:59:29.329275 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-06-03 15:59:29.329286 | orchestrator | skipping: [testbed-node-3] 2025-06-03 15:59:29.329295 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-06-03 15:59:29.329308 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-03 15:59:29.329316 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-03 15:59:29.329323 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-06-03 15:59:29.329374 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-06-03 15:59:29.329384 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-06-03 15:59:29.329392 | orchestrator | skipping: [testbed-node-4] 2025-06-03 15:59:29.329400 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-06-03 15:59:29.329407 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-03 15:59:29.329415 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-03 15:59:29.329423 
| orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-06-03 15:59:29.329431 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-06-03 15:59:29.329439 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-06-03 15:59:29.329447 | orchestrator | skipping: [testbed-node-5] 2025-06-03 15:59:29.329454 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-06-03 15:59:29.329462 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-03 15:59:29.329470 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-03 15:59:29.329478 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-06-03 15:59:29.329486 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-06-03 15:59:29.329494 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-06-03 15:59:29.329501 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.329509 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-06-03 15:59:29.329517 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-03 15:59:29.329525 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-03 15:59:29.329533 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-06-03 15:59:29.329540 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-06-03 15:59:29.329548 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-06-03 15:59:29.329556 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.329564 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-06-03 15:59:29.329571 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-03 15:59:29.329579 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-03 15:59:29.329587 | 
orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-06-03 15:59:29.329595 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-06-03 15:59:29.329603 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-06-03 15:59:29.329611 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.329619 | orchestrator | 2025-06-03 15:59:29.329627 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-06-03 15:59:29.329634 | orchestrator | 2025-06-03 15:59:29.329642 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-06-03 15:59:29.329650 | orchestrator | Tuesday 03 June 2025 15:59:24 +0000 (0:00:01.341) 0:08:31.357 ********** 2025-06-03 15:59:29.329658 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-06-03 15:59:29.329666 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-06-03 15:59:29.329674 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.329681 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-06-03 15:59:29.329689 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-06-03 15:59:29.329697 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.329705 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-06-03 15:59:29.329718 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-06-03 15:59:29.329726 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.329734 | orchestrator | 2025-06-03 15:59:29.329742 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-06-03 15:59:29.329750 | orchestrator | 2025-06-03 15:59:29.329758 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-06-03 15:59:29.329765 | orchestrator | Tuesday 03 June 2025 15:59:25 +0000 
(0:00:00.745) 0:08:32.103 ********** 2025-06-03 15:59:29.329773 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.329781 | orchestrator | 2025-06-03 15:59:29.329789 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-06-03 15:59:29.329797 | orchestrator | 2025-06-03 15:59:29.329805 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-06-03 15:59:29.329813 | orchestrator | Tuesday 03 June 2025 15:59:26 +0000 (0:00:00.650) 0:08:32.753 ********** 2025-06-03 15:59:29.329821 | orchestrator | skipping: [testbed-node-0] 2025-06-03 15:59:29.329828 | orchestrator | skipping: [testbed-node-1] 2025-06-03 15:59:29.329836 | orchestrator | skipping: [testbed-node-2] 2025-06-03 15:59:29.329844 | orchestrator | 2025-06-03 15:59:29.329852 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 15:59:29.329860 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-03 15:59:29.329878 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-06-03 15:59:29.329887 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-03 15:59:29.329895 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-03 15:59:29.329903 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-03 15:59:29.329911 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-06-03 15:59:29.329919 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-06-03 15:59:29.329927 | orchestrator | 2025-06-03 15:59:29.329935 | orchestrator | 2025-06-03 
15:59:29.329943 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 15:59:29.329951 | orchestrator | Tuesday 03 June 2025 15:59:26 +0000 (0:00:00.428) 0:08:33.182 ********** 2025-06-03 15:59:29.329959 | orchestrator | =============================================================================== 2025-06-03 15:59:29.329967 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 38.84s 2025-06-03 15:59:29.329975 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 37.92s 2025-06-03 15:59:29.329982 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 27.51s 2025-06-03 15:59:29.329990 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.64s 2025-06-03 15:59:29.329998 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.86s 2025-06-03 15:59:29.330006 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.35s 2025-06-03 15:59:29.330038 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.14s 2025-06-03 15:59:29.330048 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.61s 2025-06-03 15:59:29.330056 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.82s 2025-06-03 15:59:29.330070 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 12.82s 2025-06-03 15:59:29.330077 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.59s 2025-06-03 15:59:29.330085 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.98s 2025-06-03 15:59:29.330093 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.83s 2025-06-03 15:59:29.330101 | 
orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.60s 2025-06-03 15:59:29.330109 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.34s 2025-06-03 15:59:29.330116 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.84s 2025-06-03 15:59:29.330124 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.64s 2025-06-03 15:59:29.330132 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.55s 2025-06-03 15:59:29.330140 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.08s 2025-06-03 15:59:29.330147 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.76s 2025-06-03 15:59:29.330155 | orchestrator | 2025-06-03 15:59:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:59:32.361480 | orchestrator | 2025-06-03 15:59:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:59:35.404801 | orchestrator | 2025-06-03 15:59:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:59:38.439265 | orchestrator | 2025-06-03 15:59:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:59:41.483088 | orchestrator | 2025-06-03 15:59:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:59:44.525988 | orchestrator | 2025-06-03 15:59:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:59:47.569045 | orchestrator | 2025-06-03 15:59:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:59:50.610261 | orchestrator | 2025-06-03 15:59:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:59:53.654774 | orchestrator | 2025-06-03 15:59:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:59:56.693836 | orchestrator | 2025-06-03 
15:59:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 15:59:59.733597 | orchestrator | 2025-06-03 15:59:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 16:00:02.777098 | orchestrator | 2025-06-03 16:00:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 16:00:05.823525 | orchestrator | 2025-06-03 16:00:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 16:00:08.868046 | orchestrator | 2025-06-03 16:00:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 16:00:11.908316 | orchestrator | 2025-06-03 16:00:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 16:00:14.945976 | orchestrator | 2025-06-03 16:00:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 16:00:17.983635 | orchestrator | 2025-06-03 16:00:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 16:00:21.037001 | orchestrator | 2025-06-03 16:00:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 16:00:24.075946 | orchestrator | 2025-06-03 16:00:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 16:00:27.115428 | orchestrator | 2025-06-03 16:00:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-03 16:00:30.158406 | orchestrator | 2025-06-03 16:00:30.412663 | orchestrator | 2025-06-03 16:00:30.417984 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Tue Jun 3 16:00:30 UTC 2025 2025-06-03 16:00:30.420705 | orchestrator | 2025-06-03 16:00:30.925598 | orchestrator | ok: Runtime: 0:36:10.840769 2025-06-03 16:00:31.207517 | 2025-06-03 16:00:31.207656 | TASK [Bootstrap services] 2025-06-03 16:00:31.994934 | orchestrator | 2025-06-03 16:00:31.995104 | orchestrator | # BOOTSTRAP 2025-06-03 16:00:31.995125 | orchestrator | 2025-06-03 16:00:31.995138 | orchestrator | + set -e 2025-06-03 16:00:31.995151 | orchestrator | + echo 2025-06-03 16:00:31.995164 | orchestrator | + echo '# BOOTSTRAP' 
2025-06-03 16:00:31.995179 | orchestrator | + echo
2025-06-03 16:00:31.995215 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2025-06-03 16:00:32.006735 | orchestrator | + set -e
2025-06-03 16:00:32.006827 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2025-06-03 16:00:36.191568 | orchestrator | 2025-06-03 16:00:36 | INFO  | It takes a moment until task 35ccde4c-6790-4ec0-bbdd-cc8d07af8a89 (flavor-manager) has been started and output is visible here.
2025-06-03 16:00:40.094380 | orchestrator | 2025-06-03 16:00:40 | INFO  | Flavor SCS-1V-4 created
2025-06-03 16:00:40.436121 | orchestrator | 2025-06-03 16:00:40 | INFO  | Flavor SCS-2V-8 created
2025-06-03 16:00:40.884183 | orchestrator | 2025-06-03 16:00:40 | INFO  | Flavor SCS-4V-16 created
2025-06-03 16:00:41.062687 | orchestrator | 2025-06-03 16:00:41 | INFO  | Flavor SCS-8V-32 created
2025-06-03 16:00:41.187168 | orchestrator | 2025-06-03 16:00:41 | INFO  | Flavor SCS-1V-2 created
2025-06-03 16:00:41.353400 | orchestrator | 2025-06-03 16:00:41 | INFO  | Flavor SCS-2V-4 created
2025-06-03 16:00:41.480110 | orchestrator | 2025-06-03 16:00:41 | INFO  | Flavor SCS-4V-8 created
2025-06-03 16:00:41.621725 | orchestrator | 2025-06-03 16:00:41 | INFO  | Flavor SCS-8V-16 created
2025-06-03 16:00:41.774207 | orchestrator | 2025-06-03 16:00:41 | INFO  | Flavor SCS-16V-32 created
2025-06-03 16:00:41.907578 | orchestrator | 2025-06-03 16:00:41 | INFO  | Flavor SCS-1V-8 created
2025-06-03 16:00:42.043176 | orchestrator | 2025-06-03 16:00:42 | INFO  | Flavor SCS-2V-16 created
2025-06-03 16:00:42.178207 | orchestrator | 2025-06-03 16:00:42 | INFO  | Flavor SCS-4V-32 created
2025-06-03 16:00:42.315477 | orchestrator | 2025-06-03 16:00:42 | INFO  | Flavor SCS-1L-1 created
2025-06-03 16:00:42.453432 | orchestrator | 2025-06-03 16:00:42 | INFO  | Flavor SCS-2V-4-20s created
2025-06-03 16:00:42.608153 | orchestrator | 2025-06-03 16:00:42 | INFO  | Flavor SCS-4V-16-100s created
2025-06-03 16:00:42.731236 | orchestrator | 2025-06-03 16:00:42 | INFO  | Flavor SCS-1V-4-10 created
2025-06-03 16:00:42.861785 | orchestrator | 2025-06-03 16:00:42 | INFO  | Flavor SCS-2V-8-20 created
2025-06-03 16:00:42.987856 | orchestrator | 2025-06-03 16:00:42 | INFO  | Flavor SCS-4V-16-50 created
2025-06-03 16:00:43.112982 | orchestrator | 2025-06-03 16:00:43 | INFO  | Flavor SCS-8V-32-100 created
2025-06-03 16:00:43.240567 | orchestrator | 2025-06-03 16:00:43 | INFO  | Flavor SCS-1V-2-5 created
2025-06-03 16:00:43.353488 | orchestrator | 2025-06-03 16:00:43 | INFO  | Flavor SCS-2V-4-10 created
2025-06-03 16:00:43.500927 | orchestrator | 2025-06-03 16:00:43 | INFO  | Flavor SCS-4V-8-20 created
2025-06-03 16:00:43.642329 | orchestrator | 2025-06-03 16:00:43 | INFO  | Flavor SCS-8V-16-50 created
2025-06-03 16:00:43.788312 | orchestrator | 2025-06-03 16:00:43 | INFO  | Flavor SCS-16V-32-100 created
2025-06-03 16:00:43.916180 | orchestrator | 2025-06-03 16:00:43 | INFO  | Flavor SCS-1V-8-20 created
2025-06-03 16:00:44.049395 | orchestrator | 2025-06-03 16:00:44 | INFO  | Flavor SCS-2V-16-50 created
2025-06-03 16:00:44.196247 | orchestrator | 2025-06-03 16:00:44 | INFO  | Flavor SCS-4V-32-100 created
2025-06-03 16:00:44.331778 | orchestrator | 2025-06-03 16:00:44 | INFO  | Flavor SCS-1L-1-5 created
2025-06-03 16:00:46.513405 | orchestrator | 2025-06-03 16:00:46 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-06-03 16:00:46.517672 | orchestrator | Registering Redlock._acquired_script
2025-06-03 16:00:46.517704 | orchestrator | Registering Redlock._extend_script
2025-06-03 16:00:46.517735 | orchestrator | Registering Redlock._release_script
2025-06-03 16:00:46.577300 | orchestrator | 2025-06-03 16:00:46 | INFO  | Task e066c8fe-24ad-4dcf-93ca-0272933cf1c3 (bootstrap-basic) was prepared for execution.
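The flavor names created above follow the SCS flavor naming scheme (`SCS-<vCPUs><cpu suffix>-<RAM GiB>[-<disk GB>[<disk type>]]`, e.g. `SCS-4V-16-100s`). A minimal illustrative parser, a sketch rather than a full implementation of the SCS spec (the suffix semantics noted in comments are assumptions):

```python
import re

# Simplified pattern for the flavor names seen in this log; the real SCS
# naming spec allows more modifiers than are covered here.
FLAVOR_RE = re.compile(r"^SCS-(\d+)([A-Z])-(\d+)(?:-(\d+)([a-z]?))?$")

def parse_scs_flavor(name: str) -> dict:
    """Split an SCS flavor name into its resource components."""
    m = FLAVOR_RE.match(name)
    if m is None:
        raise ValueError(f"not an SCS flavor name: {name}")
    vcpus, cpu_suffix, ram, disk, disk_type = m.groups()
    return {
        "vcpus": int(vcpus),
        "cpu_suffix": cpu_suffix,        # e.g. V = oversubscribed vCPU (assumption)
        "ram_gib": int(ram),
        "disk_gb": int(disk) if disk else None,  # None = no root disk encoded
        "disk_type": disk_type or None,  # e.g. s = SSD-class storage (assumption)
    }
```

For example, `parse_scs_flavor("SCS-4V-16-100s")` yields 4 vCPUs, 16 GiB RAM, and a 100 GB `s`-type disk, while `SCS-1L-1` carries no disk component.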
2025-06-03 16:00:46.577400 | orchestrator | 2025-06-03 16:00:46 | INFO  | It takes a moment until task e066c8fe-24ad-4dcf-93ca-0272933cf1c3 (bootstrap-basic) has been started and output is visible here.
2025-06-03 16:00:50.743025 | orchestrator |
2025-06-03 16:00:50.743140 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2025-06-03 16:00:50.748087 | orchestrator |
2025-06-03 16:00:50.748212 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-03 16:00:50.748410 | orchestrator | Tuesday 03 June 2025 16:00:50 +0000 (0:00:00.076) 0:00:00.076 **********
2025-06-03 16:00:52.582758 | orchestrator | ok: [localhost]
2025-06-03 16:00:52.582867 | orchestrator |
2025-06-03 16:00:52.582877 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2025-06-03 16:00:52.582883 | orchestrator | Tuesday 03 June 2025 16:00:52 +0000 (0:00:01.839) 0:00:01.916 **********
2025-06-03 16:01:00.336653 | orchestrator | ok: [localhost]
2025-06-03 16:01:00.336882 | orchestrator |
2025-06-03 16:01:00.337121 | orchestrator | TASK [Create volume type LUKS] *************************************************
2025-06-03 16:01:00.337146 | orchestrator | Tuesday 03 June 2025 16:01:00 +0000 (0:00:07.755) 0:00:09.671 **********
2025-06-03 16:01:07.578334 | orchestrator | changed: [localhost]
2025-06-03 16:01:07.578487 | orchestrator |
2025-06-03 16:01:07.578627 | orchestrator | TASK [Get volume type local] ***************************************************
2025-06-03 16:01:07.579194 | orchestrator | Tuesday 03 June 2025 16:01:07 +0000 (0:00:07.241) 0:00:16.912 **********
2025-06-03 16:01:14.273646 | orchestrator | ok: [localhost]
2025-06-03 16:01:14.273754 | orchestrator |
2025-06-03 16:01:14.273776 | orchestrator | TASK [Create volume type local] ************************************************
2025-06-03 16:01:14.273789 | orchestrator | Tuesday 03 June 2025 16:01:14 +0000 (0:00:06.694) 0:00:23.607 **********
2025-06-03 16:01:20.203210 | orchestrator | changed: [localhost]
2025-06-03 16:01:20.203801 | orchestrator |
2025-06-03 16:01:20.204518 | orchestrator | TASK [Create public network] ***************************************************
2025-06-03 16:01:20.205664 | orchestrator | Tuesday 03 June 2025 16:01:20 +0000 (0:00:05.930) 0:00:29.538 **********
2025-06-03 16:01:27.177787 | orchestrator | changed: [localhost]
2025-06-03 16:01:27.177877 | orchestrator |
2025-06-03 16:01:27.178826 | orchestrator | TASK [Set public network to default] *******************************************
2025-06-03 16:01:27.179151 | orchestrator | Tuesday 03 June 2025 16:01:27 +0000 (0:00:06.974) 0:00:36.512 **********
2025-06-03 16:01:34.216694 | orchestrator | changed: [localhost]
2025-06-03 16:01:34.217677 | orchestrator |
2025-06-03 16:01:34.217782 | orchestrator | TASK [Create public subnet] ****************************************************
2025-06-03 16:01:34.218896 | orchestrator | Tuesday 03 June 2025 16:01:34 +0000 (0:00:07.038) 0:00:43.550 **********
2025-06-03 16:01:39.361499 | orchestrator | changed: [localhost]
2025-06-03 16:01:39.361829 | orchestrator |
2025-06-03 16:01:39.362772 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-06-03 16:01:39.363607 | orchestrator | Tuesday 03 June 2025 16:01:39 +0000 (0:00:05.145) 0:00:48.696 **********
2025-06-03 16:01:43.601351 | orchestrator | changed: [localhost]
2025-06-03 16:01:43.602623 | orchestrator |
2025-06-03 16:01:43.604714 | orchestrator | TASK [Create manager role] *****************************************************
2025-06-03 16:01:43.606686 | orchestrator | Tuesday 03 June 2025 16:01:43 +0000 (0:00:04.239) 0:00:52.935 **********
2025-06-03 16:01:47.182625 | orchestrator | ok: [localhost]
2025-06-03 16:01:47.182768 | orchestrator |
2025-06-03 16:01:47.182781 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 16:01:47.183138 | orchestrator | 2025-06-03 16:01:47 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-03 16:01:47.183166 | orchestrator | 2025-06-03 16:01:47 | INFO  | Please wait and do not abort execution.
2025-06-03 16:01:47.184476 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 16:01:47.184566 | orchestrator |
2025-06-03 16:01:47.184743 | orchestrator |
2025-06-03 16:01:47.185532 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 16:01:47.187765 | orchestrator | Tuesday 03 June 2025 16:01:47 +0000 (0:00:03.578) 0:00:56.514 **********
2025-06-03 16:01:47.187808 | orchestrator | ===============================================================================
2025-06-03 16:01:47.188957 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.76s
2025-06-03 16:01:47.189003 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.24s
2025-06-03 16:01:47.189011 | orchestrator | Set public network to default ------------------------------------------- 7.04s
2025-06-03 16:01:47.189300 | orchestrator | Create public network --------------------------------------------------- 6.97s
2025-06-03 16:01:47.189797 | orchestrator | Get volume type local --------------------------------------------------- 6.69s
2025-06-03 16:01:47.190349 | orchestrator | Create volume type local ------------------------------------------------ 5.93s
2025-06-03 16:01:47.190460 | orchestrator | Create public subnet ---------------------------------------------------- 5.15s
2025-06-03 16:01:47.190507 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.24s
2025-06-03 16:01:47.191009 | orchestrator | Create manager role ----------------------------------------------------- 3.58s
2025-06-03 16:01:47.191247 | orchestrator | Gathering Facts --------------------------------------------------------- 1.84s
2025-06-03 16:01:49.493634 | orchestrator | 2025-06-03 16:01:49 | INFO  | It takes a moment until task 4cee6f81-3f28-490f-9264-02afce917596 (image-manager) has been started and output is visible here.
2025-06-03 16:01:53.041504 | orchestrator | 2025-06-03 16:01:53 | INFO  | Processing image 'Cirros 0.6.2'
2025-06-03 16:01:53.110614 | orchestrator | 2025-06-03 16:01:53 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-06-03 16:01:53.110976 | orchestrator | 2025-06-03 16:01:53 | INFO  | Importing image Cirros 0.6.2
2025-06-03 16:01:53.111603 | orchestrator | 2025-06-03 16:01:53 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-06-03 16:01:55.097482 | orchestrator | 2025-06-03 16:01:55 | INFO  | Waiting for image to leave queued state...
2025-06-03 16:01:57.143613 | orchestrator | 2025-06-03 16:01:57 | INFO  | Waiting for import to complete...
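The play recap above is what CI typically inspects to decide pass/fail. A small hypothetical helper (not part of the testbed scripts) that turns an Ansible `PLAY RECAP` host line into a dict of counters:

```python
import re

def parse_recap_line(line: str) -> tuple[str, dict]:
    """Parse 'host : ok=10  changed=6 ...' into (host, {counter: value})."""
    host, _, counters = line.partition(":")
    stats = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", counters)}
    return host.strip(), stats

host, stats = parse_recap_line(
    "localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0"
)
# A run is healthy when nothing failed and every host was reachable.
healthy = stats["failed"] == 0 and stats["unreachable"] == 0
```

With the recap line from this log, `healthy` is true (`ok=10`, `changed=6`, all failure counters zero).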
2025-06-03 16:02:07.462295 | orchestrator | 2025-06-03 16:02:07 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2025-06-03 16:02:07.699667 | orchestrator | 2025-06-03 16:02:07 | INFO  | Checking parameters of 'Cirros 0.6.2'
2025-06-03 16:02:07.700138 | orchestrator | 2025-06-03 16:02:07 | INFO  | Setting internal_version = 0.6.2
2025-06-03 16:02:07.701368 | orchestrator | 2025-06-03 16:02:07 | INFO  | Setting image_original_user = cirros
2025-06-03 16:02:07.702385 | orchestrator | 2025-06-03 16:02:07 | INFO  | Adding tag os:cirros
2025-06-03 16:02:08.007607 | orchestrator | 2025-06-03 16:02:08 | INFO  | Setting property architecture: x86_64
2025-06-03 16:02:08.317258 | orchestrator | 2025-06-03 16:02:08 | INFO  | Setting property hw_disk_bus: scsi
2025-06-03 16:02:08.583591 | orchestrator | 2025-06-03 16:02:08 | INFO  | Setting property hw_rng_model: virtio
2025-06-03 16:02:08.791089 | orchestrator | 2025-06-03 16:02:08 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-03 16:02:08.992164 | orchestrator | 2025-06-03 16:02:08 | INFO  | Setting property hw_watchdog_action: reset
2025-06-03 16:02:09.229268 | orchestrator | 2025-06-03 16:02:09 | INFO  | Setting property hypervisor_type: qemu
2025-06-03 16:02:09.456399 | orchestrator | 2025-06-03 16:02:09 | INFO  | Setting property os_distro: cirros
2025-06-03 16:02:09.712135 | orchestrator | 2025-06-03 16:02:09 | INFO  | Setting property replace_frequency: never
2025-06-03 16:02:09.983854 | orchestrator | 2025-06-03 16:02:09 | INFO  | Setting property uuid_validity: none
2025-06-03 16:02:10.192775 | orchestrator | 2025-06-03 16:02:10 | INFO  | Setting property provided_until: none
2025-06-03 16:02:10.412631 | orchestrator | 2025-06-03 16:02:10 | INFO  | Setting property image_description: Cirros
2025-06-03 16:02:10.620995 | orchestrator | 2025-06-03 16:02:10 | INFO  | Setting property image_name: Cirros
2025-06-03 16:02:10.849209 | orchestrator | 2025-06-03 16:02:10 | INFO  | Setting property internal_version: 0.6.2
2025-06-03 16:02:11.089826 | orchestrator | 2025-06-03 16:02:11 | INFO  | Setting property image_original_user: cirros
2025-06-03 16:02:11.329514 | orchestrator | 2025-06-03 16:02:11 | INFO  | Setting property os_version: 0.6.2
2025-06-03 16:02:11.584423 | orchestrator | 2025-06-03 16:02:11 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-06-03 16:02:11.804969 | orchestrator | 2025-06-03 16:02:11 | INFO  | Setting property image_build_date: 2023-05-30
2025-06-03 16:02:12.054257 | orchestrator | 2025-06-03 16:02:12 | INFO  | Checking status of 'Cirros 0.6.2'
2025-06-03 16:02:12.054361 | orchestrator | 2025-06-03 16:02:12 | INFO  | Checking visibility of 'Cirros 0.6.2'
2025-06-03 16:02:12.055035 | orchestrator | 2025-06-03 16:02:12 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2025-06-03 16:02:12.269810 | orchestrator | 2025-06-03 16:02:12 | INFO  | Processing image 'Cirros 0.6.3'
2025-06-03 16:02:12.495338 | orchestrator | 2025-06-03 16:02:12 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-06-03 16:02:12.495429 | orchestrator | 2025-06-03 16:02:12 | INFO  | Importing image Cirros 0.6.3
2025-06-03 16:02:12.495441 | orchestrator | 2025-06-03 16:02:12 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-06-03 16:02:13.681197 | orchestrator | 2025-06-03 16:02:13 | INFO  | Waiting for image to leave queued state...
2025-06-03 16:02:15.774096 | orchestrator | 2025-06-03 16:02:15 | INFO  | Waiting for import to complete...
2025-06-03 16:02:25.919242 | orchestrator | 2025-06-03 16:02:25 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-06-03 16:02:26.208793 | orchestrator | 2025-06-03 16:02:26 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-06-03 16:02:26.210541 | orchestrator | 2025-06-03 16:02:26 | INFO  | Setting internal_version = 0.6.3
2025-06-03 16:02:26.211417 | orchestrator | 2025-06-03 16:02:26 | INFO  | Setting image_original_user = cirros
2025-06-03 16:02:26.212838 | orchestrator | 2025-06-03 16:02:26 | INFO  | Adding tag os:cirros
2025-06-03 16:02:26.469272 | orchestrator | 2025-06-03 16:02:26 | INFO  | Setting property architecture: x86_64
2025-06-03 16:02:26.661521 | orchestrator | 2025-06-03 16:02:26 | INFO  | Setting property hw_disk_bus: scsi
2025-06-03 16:02:26.874982 | orchestrator | 2025-06-03 16:02:26 | INFO  | Setting property hw_rng_model: virtio
2025-06-03 16:02:27.111317 | orchestrator | 2025-06-03 16:02:27 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-03 16:02:27.340179 | orchestrator | 2025-06-03 16:02:27 | INFO  | Setting property hw_watchdog_action: reset
2025-06-03 16:02:27.556843 | orchestrator | 2025-06-03 16:02:27 | INFO  | Setting property hypervisor_type: qemu
2025-06-03 16:02:27.803347 | orchestrator | 2025-06-03 16:02:27 | INFO  | Setting property os_distro: cirros
2025-06-03 16:02:28.028146 | orchestrator | 2025-06-03 16:02:28 | INFO  | Setting property replace_frequency: never
2025-06-03 16:02:28.248604 | orchestrator | 2025-06-03 16:02:28 | INFO  | Setting property uuid_validity: none
2025-06-03 16:02:28.487504 | orchestrator | 2025-06-03 16:02:28 | INFO  | Setting property provided_until: none
2025-06-03 16:02:28.709428 | orchestrator | 2025-06-03 16:02:28 | INFO  | Setting property image_description: Cirros
2025-06-03 16:02:28.918992 | orchestrator | 2025-06-03 16:02:28 | INFO  | Setting property image_name: Cirros
2025-06-03 16:02:29.450384 | orchestrator | 2025-06-03 16:02:29 | INFO  | Setting property internal_version: 0.6.3
2025-06-03 16:02:29.642607 | orchestrator | 2025-06-03 16:02:29 | INFO  | Setting property image_original_user: cirros
2025-06-03 16:02:29.840783 | orchestrator | 2025-06-03 16:02:29 | INFO  | Setting property os_version: 0.6.3
2025-06-03 16:02:30.092830 | orchestrator | 2025-06-03 16:02:30 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-06-03 16:02:30.313386 | orchestrator | 2025-06-03 16:02:30 | INFO  | Setting property image_build_date: 2024-09-26
2025-06-03 16:02:30.528957 | orchestrator | 2025-06-03 16:02:30 | INFO  | Checking status of 'Cirros 0.6.3'
2025-06-03 16:02:30.529891 | orchestrator | 2025-06-03 16:02:30 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-06-03 16:02:30.530911 | orchestrator | 2025-06-03 16:02:30 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-06-03 16:02:31.625957 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-06-03 16:02:33.502340 | orchestrator | 2025-06-03 16:02:33 | INFO  | date: 2025-06-03
2025-06-03 16:02:33.502441 | orchestrator | 2025-06-03 16:02:33 | INFO  | image: octavia-amphora-haproxy-2024.2.20250603.qcow2
2025-06-03 16:02:33.502461 | orchestrator | 2025-06-03 16:02:33 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250603.qcow2
2025-06-03 16:02:33.502496 | orchestrator | 2025-06-03 16:02:33 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250603.qcow2.CHECKSUM
2025-06-03 16:02:33.523187 | orchestrator | 2025-06-03 16:02:33 | INFO  | checksum: 7f57cebcf47e21267f186897438d3e2a516fb862e8a8c745c06679ffa81da60f
2025-06-03 16:02:33.591681 | orchestrator | 2025-06-03 16:02:33 | INFO  | It takes a moment until task 1bad0ecc-2af3-487e-a40c-5b8e814fab8c (image-manager) has been started and output is visible here.
2025-06-03 16:02:33.831594 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-06-03 16:02:33.831789 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound
2025-06-03 16:02:36.007535 | orchestrator | 2025-06-03 16:02:35 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-06-03'
2025-06-03 16:02:36.018610 | orchestrator | 2025-06-03 16:02:36 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250603.qcow2: 200
2025-06-03 16:02:36.018987 | orchestrator | 2025-06-03 16:02:36 | INFO  | Importing image OpenStack Octavia Amphora 2025-06-03
2025-06-03 16:02:36.020237 | orchestrator | 2025-06-03 16:02:36 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250603.qcow2
2025-06-03 16:02:36.450447 | orchestrator | 2025-06-03 16:02:36 | INFO  | Waiting for image to leave queued state...
2025-06-03 16:02:38.500459 | orchestrator | 2025-06-03 16:02:38 | INFO  | Waiting for import to complete...
2025-06-03 16:02:48.615993 | orchestrator | 2025-06-03 16:02:48 | INFO  | Waiting for import to complete...
2025-06-03 16:02:58.702378 | orchestrator | 2025-06-03 16:02:58 | INFO  | Waiting for import to complete...
2025-06-03 16:03:09.024301 | orchestrator | 2025-06-03 16:03:09 | INFO  | Waiting for import to complete...
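The amphora script above logs the image URL, a `checksum_url`, and the resolved SHA-256 digest before handing off to image-manager. The verification step can be sketched as follows; the `.CHECKSUM` file layout assumed here (coreutils `sha256sum` style, `<hex>  <filename>`) is an assumption, not taken from the log:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of the downloaded image bytes."""
    return hashlib.sha256(data).hexdigest()

def expected_digest(checksum_text: str, filename: str) -> str:
    """Find the digest for `filename` in a sha256sum-style CHECKSUM file."""
    for line in checksum_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[-1].lstrip("*") == filename:
            return parts[0]
    raise KeyError(filename)

def verify(data: bytes, checksum_text: str, filename: str) -> bool:
    return sha256_of(data) == expected_digest(checksum_text, filename)
```

In the logged run the resolved digest (`7f57ceb...a60f`) would be compared against the SHA-256 of the downloaded `octavia-amphora-haproxy-2024.2.20250603.qcow2` before the import proceeds.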
2025-06-03 16:03:19.100020 | orchestrator | 2025-06-03 16:03:19 | INFO  | Waiting for import to complete...
2025-06-03 16:03:29.221256 | orchestrator | 2025-06-03 16:03:29 | INFO  | Import of 'OpenStack Octavia Amphora 2025-06-03' successfully completed, reloading images
2025-06-03 16:03:29.605976 | orchestrator | 2025-06-03 16:03:29 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-06-03'
2025-06-03 16:03:29.607391 | orchestrator | 2025-06-03 16:03:29 | INFO  | Setting internal_version = 2025-06-03
2025-06-03 16:03:29.608646 | orchestrator | 2025-06-03 16:03:29 | INFO  | Setting image_original_user = ubuntu
2025-06-03 16:03:29.609591 | orchestrator | 2025-06-03 16:03:29 | INFO  | Adding tag amphora
2025-06-03 16:03:29.826606 | orchestrator | 2025-06-03 16:03:29 | INFO  | Adding tag os:ubuntu
2025-06-03 16:03:30.113218 | orchestrator | 2025-06-03 16:03:30 | INFO  | Setting property architecture: x86_64
2025-06-03 16:03:30.309175 | orchestrator | 2025-06-03 16:03:30 | INFO  | Setting property hw_disk_bus: scsi
2025-06-03 16:03:30.513784 | orchestrator | 2025-06-03 16:03:30 | INFO  | Setting property hw_rng_model: virtio
2025-06-03 16:03:30.777345 | orchestrator | 2025-06-03 16:03:30 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-03 16:03:30.984400 | orchestrator | 2025-06-03 16:03:30 | INFO  | Setting property hw_watchdog_action: reset
2025-06-03 16:03:31.200988 | orchestrator | 2025-06-03 16:03:31 | INFO  | Setting property hypervisor_type: qemu
2025-06-03 16:03:31.417923 | orchestrator | 2025-06-03 16:03:31 | INFO  | Setting property os_distro: ubuntu
2025-06-03 16:03:31.603484 | orchestrator | 2025-06-03 16:03:31 | INFO  | Setting property replace_frequency: quarterly
2025-06-03 16:03:31.822502 | orchestrator | 2025-06-03 16:03:31 | INFO  | Setting property uuid_validity: last-1
2025-06-03 16:03:32.063265 | orchestrator | 2025-06-03 16:03:32 | INFO  | Setting property provided_until: none
2025-06-03 16:03:32.290250 | orchestrator | 2025-06-03 16:03:32 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-06-03 16:03:32.507197 | orchestrator | 2025-06-03 16:03:32 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-06-03 16:03:32.730277 | orchestrator | 2025-06-03 16:03:32 | INFO  | Setting property internal_version: 2025-06-03
2025-06-03 16:03:32.953711 | orchestrator | 2025-06-03 16:03:32 | INFO  | Setting property image_original_user: ubuntu
2025-06-03 16:03:33.193787 | orchestrator | 2025-06-03 16:03:33 | INFO  | Setting property os_version: 2025-06-03
2025-06-03 16:03:33.434206 | orchestrator | 2025-06-03 16:03:33 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250603.qcow2
2025-06-03 16:03:33.636436 | orchestrator | 2025-06-03 16:03:33 | INFO  | Setting property image_build_date: 2025-06-03
2025-06-03 16:03:33.883586 | orchestrator | 2025-06-03 16:03:33 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-06-03'
2025-06-03 16:03:33.883980 | orchestrator | 2025-06-03 16:03:33 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-06-03'
2025-06-03 16:03:34.085978 | orchestrator | 2025-06-03 16:03:34 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-06-03 16:03:34.086157 | orchestrator | 2025-06-03 16:03:34 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-06-03 16:03:34.086790 | orchestrator | 2025-06-03 16:03:34 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-06-03 16:03:34.087794 | orchestrator | 2025-06-03 16:03:34 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-06-03 16:03:34.893214 | orchestrator | ok: Runtime: 0:03:02.926643
2025-06-03 16:03:34.954246 |
2025-06-03 16:03:34.954422 | TASK [Run checks]
2025-06-03 16:03:35.664005 | orchestrator | + set -e
2025-06-03 16:03:35.664212 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-03 16:03:35.664236 | orchestrator | ++ export INTERACTIVE=false
2025-06-03 16:03:35.664256 | orchestrator | ++ INTERACTIVE=false
2025-06-03 16:03:35.664270 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-03 16:03:35.664282 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-03 16:03:35.664297 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-06-03 16:03:35.664901 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-06-03 16:03:35.671536 | orchestrator |
2025-06-03 16:03:35.671667 | orchestrator | # CHECK
2025-06-03 16:03:35.671697 | orchestrator |
2025-06-03 16:03:35.671721 | orchestrator | ++ export MANAGER_VERSION=latest
2025-06-03 16:03:35.671750 | orchestrator | ++ MANAGER_VERSION=latest
2025-06-03 16:03:35.671771 | orchestrator | + echo
2025-06-03 16:03:35.671791 | orchestrator | + echo '# CHECK'
2025-06-03 16:03:35.671810 | orchestrator | + echo
2025-06-03 16:03:35.671836 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-03 16:03:35.672674 | orchestrator | ++ semver latest 5.0.0
2025-06-03 16:03:35.733361 | orchestrator |
2025-06-03 16:03:35.733447 | orchestrator | ## Containers @ testbed-manager
2025-06-03 16:03:35.733460 | orchestrator |
2025-06-03 16:03:35.733470 | orchestrator | + [[ -1 -eq -1 ]]
2025-06-03 16:03:35.733478 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-03 16:03:35.733485 | orchestrator | + echo
2025-06-03 16:03:35.733493 | orchestrator | + echo '## Containers @ testbed-manager'
2025-06-03 16:03:35.733501 | orchestrator | + echo
2025-06-03 16:03:35.733508 | orchestrator | + osism container testbed-manager ps
2025-06-03 16:03:37.752309 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-03 16:03:37.752406 | orchestrator | a95549a0e7ba registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_blackbox_exporter
2025-06-03 16:03:37.752427 | orchestrator | 8e297a86c047 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager
2025-06-03 16:03:37.752440 | orchestrator | ecf282bc4555 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2025-06-03 16:03:37.752448 | orchestrator | ba80c02819c8 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-06-03 16:03:37.752456 | orchestrator | 4020ac09e8ce registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_server
2025-06-03 16:03:37.752468 | orchestrator | 1e161344038e registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 19 minutes ago Up 19 minutes cephclient
2025-06-03 16:03:37.752476 | orchestrator | 5572d50e470d registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2025-06-03 16:03:37.752483 | orchestrator | f4ceab838e68 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2025-06-03 16:03:37.752490 | orchestrator | 0f8b933b24aa phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 32 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin
2025-06-03 16:03:37.752520 | orchestrator | 60af7cc24a12 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-06-03 16:03:37.752527 | orchestrator | 6fdd7bc97aa9 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 33 minutes ago Up 33 minutes openstackclient
2025-06-03 16:03:37.752535 | orchestrator | f8144fee42e6 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 33 minutes ago Up 33 minutes (healthy) 8080/tcp homer
2025-06-03 16:03:37.752543 | orchestrator | e01acb493f84 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 54 minutes ago Up 53 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-06-03 16:03:37.752552 | orchestrator | 6f39e8a00cb5 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 57 minutes ago Up 39 minutes (healthy) manager-inventory_reconciler-1
2025-06-03 16:03:37.752558 | orchestrator | 4e732599cdcb registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 57 minutes ago Up 40 minutes (healthy) ceph-ansible
2025-06-03 16:03:37.752577 | orchestrator | c75432732210 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 57 minutes ago Up 40 minutes (healthy) osism-kubernetes
2025-06-03 16:03:37.752586 | orchestrator | 077c4be86ef2 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 57 minutes ago Up 40 minutes (healthy) kolla-ansible
2025-06-03 16:03:37.752591 | orchestrator | 7db35c1abc99 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 57 minutes ago Up 40 minutes (healthy) osism-ansible
2025-06-03 16:03:37.752596 | orchestrator | b87d97d9a95d registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 57 minutes ago Up 40 minutes (healthy) 8000/tcp manager-ara-server-1
2025-06-03 16:03:37.752604 | orchestrator | d4531466cfd8 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) manager-flower-1
2025-06-03 16:03:37.752612 | orchestrator | 43c3cb4d18f1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) manager-beat-1
2025-06-03 16:03:37.752619 | orchestrator | 490ef9695e9d registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-06-03 16:03:37.752626 | orchestrator | 7393c4c74b83 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 57 minutes ago Up 40 minutes (healthy) 6379/tcp manager-redis-1
2025-06-03 16:03:37.752633 | orchestrator | 498d8ac4250d registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) manager-openstack-1
2025-06-03 16:03:37.752647 | orchestrator | 3758ea45dc4a registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 57 minutes ago Up 40 minutes (healthy) 3306/tcp manager-mariadb-1
2025-06-03 16:03:37.752654 | orchestrator | 9b2483d7e993 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) manager-listener-1
2025-06-03 16:03:37.752661 | orchestrator | a9eb370043fd registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 57 minutes ago Up 40 minutes (healthy) osismclient
2025-06-03 16:03:37.752669 | orchestrator | 260da244624c registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" 58 minutes ago Up 58 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-06-03 16:03:37.992551 | orchestrator |
2025-06-03 16:03:37.992679 | orchestrator | ## Images @ testbed-manager
2025-06-03 16:03:37.992707 | orchestrator |
2025-06-03 16:03:37.992729 | orchestrator | + echo
2025-06-03 16:03:37.992750 | orchestrator | + echo '## Images @ testbed-manager'
2025-06-03 16:03:37.992771 | orchestrator | + echo
2025-06-03 16:03:37.992791 | orchestrator | + osism container testbed-manager images
2025-06-03 16:03:40.064834 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-03 16:03:40.064927 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 0fdc644c8234 7 hours ago 747MB
2025-06-03 16:03:40.064934 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 dec51591a0e8 7 hours ago 629MB
2025-06-03 16:03:40.064939 | orchestrator | registry.osism.tech/kolla/cron 2024.2 0f9cf6fe7555 7 hours ago 319MB
2025-06-03 16:03:40.064943 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 644b1b5e2c11 7 hours ago 359MB
2025-06-03 16:03:40.064947 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 9cea06ffd1e0 7 hours ago 361MB
2025-06-03 16:03:40.064951 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b1843478586a 7 hours ago 411MB
2025-06-03 16:03:40.064971 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 f21b38b8d8bf 7 hours ago 892MB
2025-06-03 16:03:40.064976 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 c8e6cace06e5 7 hours ago 457MB
2025-06-03 16:03:40.064980 | orchestrator | registry.osism.tech/osism/osism-ansible latest a471926978f9 8 hours ago 577MB
2025-06-03 16:03:40.064984 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 5ca2ad11866a 8 hours ago 538MB
2025-06-03 16:03:40.064988 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 86c7ad44b6c8 8 hours ago 574MB
2025-06-03 16:03:40.064991 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest be6dc7a950fc 8 hours ago 309MB
2025-06-03 16:03:40.064996 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 22880e988a69 8 hours ago 1.21GB
2025-06-03 16:03:40.065000 | orchestrator | registry.osism.tech/osism/homer v25.05.2 d16a1b460037 13 hours ago 11.5MB
2025-06-03 16:03:40.065003 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 5ede29b4dda4 13 hours ago 225MB
2025-06-03 16:03:40.065007 | orchestrator | registry.osism.tech/osism/cephclient reef 296436cb69a4 13 hours ago 454MB
2025-06-03 16:03:40.065011 | orchestrator | registry.osism.tech/osism/osism latest 17996927b7b0 16 hours ago 297MB
2025-06-03 16:03:40.065032 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 5 days ago 41.4MB
2025-06-03 16:03:40.065036 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.1 ff0a241c8a0a 7 days ago 224MB
2025-06-03 16:03:40.065040 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 6b3ebe9793bb 3 months ago 328MB
2025-06-03 16:03:40.065044 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 4 months ago 571MB
2025-06-03 16:03:40.065047 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 9 months ago 300MB
2025-06-03 16:03:40.065051 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 11 months ago 146MB
2025-06-03 16:03:40.309437 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-03 16:03:40.309829 | orchestrator | ++ semver latest 5.0.0
2025-06-03 16:03:40.364188 | orchestrator |
2025-06-03 16:03:40.364280 | orchestrator | ## Containers @ testbed-node-0
2025-06-03 16:03:40.364291 | orchestrator |
2025-06-03 16:03:40.364298 | orchestrator | + [[ -1 -eq -1 ]]
2025-06-03 16:03:40.364304 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-03 16:03:40.364310 | orchestrator | + echo
2025-06-03 16:03:40.364318 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-06-03 16:03:40.364325 | orchestrator | + echo
2025-06-03 16:03:40.364331 | orchestrator | + osism container testbed-node-0 ps
2025-06-03 16:03:42.426103 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-03 16:03:42.426225 | orchestrator | c75e426cf75d registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_novncproxy
2025-06-03 16:03:42.426245 | orchestrator | 2bfb57c076db registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-06-03 16:03:42.426258 | orchestrator | cbfcb18fc09e registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-06-03 16:03:42.426269 | orchestrator | 5b572a38572f registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…"
9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-03 16:03:42.426280 | orchestrator | 2161c9b16a21 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2025-06-03 16:03:42.426292 | orchestrator | 9998446d715b registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-06-03 16:03:42.426303 | orchestrator | 7ce94ca73c65 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-06-03 16:03:42.426315 | orchestrator | 572837020bd7 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-03 16:03:42.426326 | orchestrator | 99a89e31d792 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-06-03 16:03:42.426358 | orchestrator | a5e65755009e registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-06-03 16:03:42.426371 | orchestrator | 1406d6324b66 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-06-03 16:03:42.426382 | orchestrator | 5ba414fc8d8b registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-06-03 16:03:42.426419 | orchestrator | 94f3308eb0cb registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-06-03 16:03:42.426431 | orchestrator | 0b26b829cb68 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-06-03 16:03:42.426442 | orchestrator | c66b85621e23 
registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2025-06-03 16:03:42.426453 | orchestrator | a61f3864d949 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2025-06-03 16:03:42.426464 | orchestrator | f56e8927f905 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2025-06-03 16:03:42.426476 | orchestrator | e9544d143531 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2025-06-03 16:03:42.426487 | orchestrator | b516b17bde5f registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2025-06-03 16:03:42.426498 | orchestrator | de4291f6f5b9 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2025-06-03 16:03:42.426509 | orchestrator | e24ba77e566f registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central 2025-06-03 16:03:42.426539 | orchestrator | 4aa52859e090 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api 2025-06-03 16:03:42.426551 | orchestrator | 6bcb62a9e180 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9 2025-06-03 16:03:42.426562 | orchestrator | a0d58d0b9484 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2025-06-03 16:03:42.426573 | orchestrator | c0b4e2d417b2 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 
2025-06-03 16:03:42.426584 | orchestrator | 6e15a3f62b6a registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2025-06-03 16:03:42.426595 | orchestrator | b3a2a89b0dc1 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-0 2025-06-03 16:03:42.426606 | orchestrator | 0a7a03c6b980 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2025-06-03 16:03:42.426628 | orchestrator | cbacd71c1c53 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2025-06-03 16:03:42.426640 | orchestrator | 3b9e76f54d9a registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2025-06-03 16:03:42.426651 | orchestrator | 53e40977888e registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) horizon 2025-06-03 16:03:42.426662 | orchestrator | 8f2816a94b45 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-06-03 16:03:42.426685 | orchestrator | 6b1dd5e9fb79 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2025-06-03 16:03:42.426705 | orchestrator | bfaa54e2ef57 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-06-03 16:03:42.426731 | orchestrator | 120d6a511cef registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0 2025-06-03 16:03:42.426753 | orchestrator | 244afac106aa registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived 2025-06-03 16:03:42.426771 | orchestrator | 9dc90a37c255 
registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2025-06-03 16:03:42.426790 | orchestrator | 5d8509476f6a registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2025-06-03 16:03:42.426808 | orchestrator | 4cb90ac9b805 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_northd 2025-06-03 16:03:42.426824 | orchestrator | 298dbc7b0105 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db 2025-06-03 16:03:42.426841 | orchestrator | 054c83d5d1c1 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db 2025-06-03 16:03:42.426898 | orchestrator | 78f60fa69af2 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-0 2025-06-03 16:03:42.426917 | orchestrator | 5dae24d8187e registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2025-06-03 16:03:42.426935 | orchestrator | ae0b2781c396 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) rabbitmq 2025-06-03 16:03:42.426965 | orchestrator | 0bed4f7d07c4 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2025-06-03 16:03:42.426984 | orchestrator | 9cf3607dcf51 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-06-03 16:03:42.427003 | orchestrator | fc5a13fe9c4d registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel 2025-06-03 16:03:42.427020 | orchestrator | fc14e5c9f120 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 31 minutes ago 
Up 31 minutes (healthy) redis 2025-06-03 16:03:42.427031 | orchestrator | c53a1ae0aa10 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached 2025-06-03 16:03:42.427042 | orchestrator | cd120805bbcd registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-06-03 16:03:42.427053 | orchestrator | 38482878ef0a registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox 2025-06-03 16:03:42.427064 | orchestrator | 29823d07e8cc registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd 2025-06-03 16:03:42.666480 | orchestrator | 2025-06-03 16:03:42.666592 | orchestrator | ## Images @ testbed-node-0 2025-06-03 16:03:42.666609 | orchestrator | 2025-06-03 16:03:42.666620 | orchestrator | + echo 2025-06-03 16:03:42.666632 | orchestrator | + echo '## Images @ testbed-node-0' 2025-06-03 16:03:42.666645 | orchestrator | + echo 2025-06-03 16:03:42.666656 | orchestrator | + osism container testbed-node-0 images 2025-06-03 16:03:44.861033 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-03 16:03:44.861146 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 634f31a04325 7 hours ago 330MB 2025-06-03 16:03:44.861184 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 0fdc644c8234 7 hours ago 747MB 2025-06-03 16:03:44.861197 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 c22b3d564931 7 hours ago 376MB 2025-06-03 16:03:44.861208 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 3376d3a7a1c7 7 hours ago 1.01GB 2025-06-03 16:03:44.861220 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 63c10f4242b0 7 hours ago 1.59GB 2025-06-03 16:03:44.861231 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 97b2835a35a8 7 hours ago 1.55GB 2025-06-03 16:03:44.861242 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 
dec51591a0e8 7 hours ago 629MB 2025-06-03 16:03:44.861253 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 fa4e375f6261 7 hours ago 319MB 2025-06-03 16:03:44.861264 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 c7a6dc5a9b00 7 hours ago 419MB 2025-06-03 16:03:44.861292 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 4330adb17ac9 7 hours ago 327MB 2025-06-03 16:03:44.861304 | orchestrator | registry.osism.tech/kolla/cron 2024.2 0f9cf6fe7555 7 hours ago 319MB 2025-06-03 16:03:44.861315 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 f4ddcd428b56 7 hours ago 591MB 2025-06-03 16:03:44.861326 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 22e26dab78da 7 hours ago 362MB 2025-06-03 16:03:44.861337 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 cbb28c9d8883 7 hours ago 362MB 2025-06-03 16:03:44.861348 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 41c46a08e659 7 hours ago 1.21GB 2025-06-03 16:03:44.861359 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 644b1b5e2c11 7 hours ago 359MB 2025-06-03 16:03:44.861371 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 2ec55a760f26 7 hours ago 345MB 2025-06-03 16:03:44.861382 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 2c335042013f 7 hours ago 352MB 2025-06-03 16:03:44.861394 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b1843478586a 7 hours ago 411MB 2025-06-03 16:03:44.861405 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 29302ad2830d 7 hours ago 354MB 2025-06-03 16:03:44.861416 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 a58bb5190322 7 hours ago 325MB 2025-06-03 16:03:44.861427 | orchestrator | registry.osism.tech/kolla/redis 2024.2 b048cb752f3b 7 hours ago 326MB 2025-06-03 16:03:44.861438 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 
2024.2 b3679ee6b1b5 7 hours ago 947MB 2025-06-03 16:03:44.861459 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 e46b3e1dbd91 7 hours ago 947MB 2025-06-03 16:03:44.861481 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 c7347a37636b 7 hours ago 948MB 2025-06-03 16:03:44.861492 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 9414e264063d 7 hours ago 948MB 2025-06-03 16:03:44.861532 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 39da9b2b3d20 7 hours ago 1.15GB 2025-06-03 16:03:44.861545 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 e0ae3d4fab8f 7 hours ago 1.25GB 2025-06-03 16:03:44.861556 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f3a2eecc9019 7 hours ago 1.13GB 2025-06-03 16:03:44.861567 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 598849ffc73a 7 hours ago 1.11GB 2025-06-03 16:03:44.861586 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 6d7a75eb8a59 7 hours ago 1.11GB 2025-06-03 16:03:44.861614 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 03d00cc13400 7 hours ago 1.2GB 2025-06-03 16:03:44.861635 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 5020d0e4252c 7 hours ago 1.31GB 2025-06-03 16:03:44.861652 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 ee087ad0c495 7 hours ago 1.11GB 2025-06-03 16:03:44.861672 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 63dd4c961e40 7 hours ago 1.12GB 2025-06-03 16:03:44.861689 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 6b86f0ff02b1 7 hours ago 1.04GB 2025-06-03 16:03:44.861740 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 c0e27efee296 7 hours ago 1.04GB 2025-06-03 16:03:44.861760 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 1116908349e8 7 hours ago 1.04GB 2025-06-03 16:03:44.861779 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 378cd14c6cb2 7 hours ago 1.04GB 
2025-06-03 16:03:44.861799 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 d333c48c38c1 7 hours ago 1.05GB 2025-06-03 16:03:44.861817 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 689c02a1f7cb 7 hours ago 1.05GB 2025-06-03 16:03:44.861837 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 4d6977a4610d 7 hours ago 1.06GB 2025-06-03 16:03:44.861883 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 aad8c5e6844b 7 hours ago 1.06GB 2025-06-03 16:03:44.861903 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 8247bb90135e 7 hours ago 1.05GB 2025-06-03 16:03:44.861921 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 d1a0f527c167 7 hours ago 1.05GB 2025-06-03 16:03:44.861940 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 20bdbfaf97c4 7 hours ago 1.04GB 2025-06-03 16:03:44.861959 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 05f6d63774c5 7 hours ago 1.04GB 2025-06-03 16:03:44.861979 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 f3cb1847373e 7 hours ago 1.29GB 2025-06-03 16:03:44.861998 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 9ffe494651e6 7 hours ago 1.29GB 2025-06-03 16:03:44.862069 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 99768b90d9f9 7 hours ago 1.42GB 2025-06-03 16:03:44.862090 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 da1dffd27d6e 7 hours ago 1.3GB 2025-06-03 16:03:44.862107 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 6d890cd220b8 7 hours ago 1.12GB 2025-06-03 16:03:44.862125 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 bb4004650b9d 7 hours ago 1.1GB 2025-06-03 16:03:44.862144 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 d182f98b391f 7 hours ago 1.1GB 2025-06-03 16:03:44.862172 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 
983aaea3073f 7 hours ago 1.1GB 2025-06-03 16:03:44.862205 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 972a139ec66a 7 hours ago 1.12GB 2025-06-03 16:03:44.862223 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 3ec47b84743f 7 hours ago 1.06GB 2025-06-03 16:03:44.862241 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 ea72918ad24f 7 hours ago 1.06GB 2025-06-03 16:03:44.862261 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c9601856b035 7 hours ago 1.06GB 2025-06-03 16:03:44.862273 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 c19454612908 7 hours ago 1.41GB 2025-06-03 16:03:44.862284 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 e4c6325b4404 7 hours ago 1.41GB 2025-06-03 16:03:44.862301 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 1be19cbf953d 7 hours ago 1.04GB 2025-06-03 16:03:44.862319 | orchestrator | registry.osism.tech/osism/ceph-daemon reef d62d4cf5c710 13 hours ago 1.27GB 2025-06-03 16:03:45.150082 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-03 16:03:45.150931 | orchestrator | ++ semver latest 5.0.0 2025-06-03 16:03:45.197210 | orchestrator | 2025-06-03 16:03:45.197323 | orchestrator | ## Containers @ testbed-node-1 2025-06-03 16:03:45.197338 | orchestrator | 2025-06-03 16:03:45.197351 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-03 16:03:45.197363 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-03 16:03:45.197374 | orchestrator | + echo 2025-06-03 16:03:45.197386 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-06-03 16:03:45.197398 | orchestrator | + echo 2025-06-03 16:03:45.197410 | orchestrator | + osism container testbed-node-1 ps 2025-06-03 16:03:47.312254 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-03 16:03:47.312349 | orchestrator | 5f88434270dc registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init 
--single-…" 6 minutes ago Up 6 minutes (healthy) nova_novncproxy 2025-06-03 16:03:47.312374 | orchestrator | 6669ff50b2ce registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-06-03 16:03:47.312387 | orchestrator | dcfa5df826d8 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-06-03 16:03:47.312401 | orchestrator | abc622c5315e registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-06-03 16:03:47.312413 | orchestrator | c4dc8bcd9683 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-06-03 16:03:47.312426 | orchestrator | 494b568bac13 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-06-03 16:03:47.312438 | orchestrator | 2fb94b701c1f registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-06-03 16:03:47.312451 | orchestrator | 89cd7a7f886e registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-03 16:03:47.312460 | orchestrator | 208bc41bde68 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-06-03 16:03:47.312469 | orchestrator | de5cfc23a9ce registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-06-03 16:03:47.312482 | orchestrator | 3cf0f0802a00 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-06-03 16:03:47.312521 | orchestrator | 977c64fb639c registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 
"dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-06-03 16:03:47.312533 | orchestrator | b3247ba8c619 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-06-03 16:03:47.312544 | orchestrator | 8873336f4dae registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-06-03 16:03:47.312554 | orchestrator | a61c18a3d634 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2025-06-03 16:03:47.312566 | orchestrator | 5b13e0b162d4 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2025-06-03 16:03:47.312577 | orchestrator | fbfeef6c0f5b registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2025-06-03 16:03:47.312588 | orchestrator | 677b7eb565a2 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2025-06-03 16:03:47.312599 | orchestrator | 5d422e379fd7 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2025-06-03 16:03:47.312611 | orchestrator | 7078aeb2b2e8 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2025-06-03 16:03:47.312626 | orchestrator | c8804be118c3 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central 2025-06-03 16:03:47.312656 | orchestrator | 8504372887b3 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api 2025-06-03 16:03:47.312668 | orchestrator | d47ddf4fc82c 
registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9 2025-06-03 16:03:47.312680 | orchestrator | 623b9bfdb7e2 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2025-06-03 16:03:47.312691 | orchestrator | e0da58f9270a registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2025-06-03 16:03:47.312705 | orchestrator | c10c4f7bc6b1 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2025-06-03 16:03:47.312719 | orchestrator | ecfab1bfcebc registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-1 2025-06-03 16:03:47.312730 | orchestrator | c0e10c10838f registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2025-06-03 16:03:47.312743 | orchestrator | b60944fbb170 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2025-06-03 16:03:47.312756 | orchestrator | f986610afb2a registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2025-06-03 16:03:47.312778 | orchestrator | b8169ca16a44 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2025-06-03 16:03:47.312791 | orchestrator | d32691aba890 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2025-06-03 16:03:47.312815 | orchestrator | 09af9fa572b9 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-06-03 16:03:47.312824 | orchestrator 
| 0ea3f7d12811 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb 2025-06-03 16:03:47.312833 | orchestrator | e0377484d7eb registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1 2025-06-03 16:03:47.312842 | orchestrator | 331da333d173 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived 2025-06-03 16:03:47.312851 | orchestrator | e9b842ef88d4 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2025-06-03 16:03:47.312941 | orchestrator | a87a306dfe27 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2025-06-03 16:03:47.312953 | orchestrator | a4d9409ac58a registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_northd 2025-06-03 16:03:47.312962 | orchestrator | 0a5bed329bf4 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_sb_db 2025-06-03 16:03:47.312971 | orchestrator | 86fd22074c70 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_nb_db 2025-06-03 16:03:47.312980 | orchestrator | e834c255998d registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-06-03 16:03:47.312989 | orchestrator | a164c27a650a registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2025-06-03 16:03:47.312998 | orchestrator | b834b7a5809a registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-1 2025-06-03 16:03:47.313015 | orchestrator | 8ccafadede25 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes 
(healthy) openvswitch_vswitchd 2025-06-03 16:03:47.313024 | orchestrator | 181834aed628 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 31 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-06-03 16:03:47.313033 | orchestrator | 25d6bf6d5afb registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel 2025-06-03 16:03:47.313042 | orchestrator | aeec11938450 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis 2025-06-03 16:03:47.313050 | orchestrator | 5d442e6f177e registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached 2025-06-03 16:03:47.313063 | orchestrator | d44e29ae0e77 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-06-03 16:03:47.313091 | orchestrator | 4365c5b29211 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-06-03 16:03:47.313107 | orchestrator | b40e795d548b registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-06-03 16:03:47.562356 | orchestrator | 2025-06-03 16:03:47.562481 | orchestrator | ## Images @ testbed-node-1 2025-06-03 16:03:47.562500 | orchestrator | 2025-06-03 16:03:47.562519 | orchestrator | + echo 2025-06-03 16:03:47.562538 | orchestrator | + echo '## Images @ testbed-node-1' 2025-06-03 16:03:47.562558 | orchestrator | + echo 2025-06-03 16:03:47.562576 | orchestrator | + osism container testbed-node-1 images 2025-06-03 16:03:49.645975 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-03 16:03:49.646218 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 634f31a04325 7 hours ago 330MB 2025-06-03 16:03:49.646247 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 0fdc644c8234 7 hours ago 747MB 2025-06-03 16:03:49.646265 | 
orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 c22b3d564931 7 hours ago 376MB 2025-06-03 16:03:49.646283 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 3376d3a7a1c7 7 hours ago 1.01GB 2025-06-03 16:03:49.646300 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 63c10f4242b0 7 hours ago 1.59GB 2025-06-03 16:03:49.646318 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 97b2835a35a8 7 hours ago 1.55GB 2025-06-03 16:03:49.646360 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 dec51591a0e8 7 hours ago 629MB 2025-06-03 16:03:49.646382 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 fa4e375f6261 7 hours ago 319MB 2025-06-03 16:03:49.646398 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 c7a6dc5a9b00 7 hours ago 419MB 2025-06-03 16:03:49.646417 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 4330adb17ac9 7 hours ago 327MB 2025-06-03 16:03:49.646436 | orchestrator | registry.osism.tech/kolla/cron 2024.2 0f9cf6fe7555 7 hours ago 319MB 2025-06-03 16:03:49.646454 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 f4ddcd428b56 7 hours ago 591MB 2025-06-03 16:03:49.646473 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 22e26dab78da 7 hours ago 362MB 2025-06-03 16:03:49.646492 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 cbb28c9d8883 7 hours ago 362MB 2025-06-03 16:03:49.646512 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 41c46a08e659 7 hours ago 1.21GB 2025-06-03 16:03:49.646571 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 644b1b5e2c11 7 hours ago 359MB 2025-06-03 16:03:49.646595 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 2ec55a760f26 7 hours ago 345MB 2025-06-03 16:03:49.646618 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 2c335042013f 7 hours ago 352MB 2025-06-03 16:03:49.646637 | orchestrator | 
registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b1843478586a 7 hours ago 411MB 2025-06-03 16:03:49.646656 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 29302ad2830d 7 hours ago 354MB 2025-06-03 16:03:49.646679 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 a58bb5190322 7 hours ago 325MB 2025-06-03 16:03:49.646700 | orchestrator | registry.osism.tech/kolla/redis 2024.2 b048cb752f3b 7 hours ago 326MB 2025-06-03 16:03:49.646719 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 b3679ee6b1b5 7 hours ago 947MB 2025-06-03 16:03:49.646738 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 e46b3e1dbd91 7 hours ago 947MB 2025-06-03 16:03:49.646789 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 9414e264063d 7 hours ago 948MB 2025-06-03 16:03:49.646809 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 c7347a37636b 7 hours ago 948MB 2025-06-03 16:03:49.646828 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 39da9b2b3d20 7 hours ago 1.15GB 2025-06-03 16:03:49.646846 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 e0ae3d4fab8f 7 hours ago 1.25GB 2025-06-03 16:03:49.646895 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f3a2eecc9019 7 hours ago 1.13GB 2025-06-03 16:03:49.646917 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 598849ffc73a 7 hours ago 1.11GB 2025-06-03 16:03:49.646937 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 6d7a75eb8a59 7 hours ago 1.11GB 2025-06-03 16:03:49.646957 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 03d00cc13400 7 hours ago 1.2GB 2025-06-03 16:03:49.646977 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 5020d0e4252c 7 hours ago 1.31GB 2025-06-03 16:03:49.646996 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 d333c48c38c1 7 hours ago 1.05GB 2025-06-03 16:03:49.647015 | orchestrator | 
registry.osism.tech/kolla/designate-central 2024.2 689c02a1f7cb 7 hours ago 1.05GB 2025-06-03 16:03:49.647033 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 4d6977a4610d 7 hours ago 1.06GB 2025-06-03 16:03:49.647082 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 aad8c5e6844b 7 hours ago 1.06GB 2025-06-03 16:03:49.647102 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 8247bb90135e 7 hours ago 1.05GB 2025-06-03 16:03:49.647120 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 d1a0f527c167 7 hours ago 1.05GB 2025-06-03 16:03:49.647140 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 f3cb1847373e 7 hours ago 1.29GB 2025-06-03 16:03:49.647159 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 9ffe494651e6 7 hours ago 1.29GB 2025-06-03 16:03:49.647179 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 99768b90d9f9 7 hours ago 1.42GB 2025-06-03 16:03:49.647198 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 da1dffd27d6e 7 hours ago 1.3GB 2025-06-03 16:03:49.647218 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 3ec47b84743f 7 hours ago 1.06GB 2025-06-03 16:03:49.647238 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 ea72918ad24f 7 hours ago 1.06GB 2025-06-03 16:03:49.647258 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c9601856b035 7 hours ago 1.06GB 2025-06-03 16:03:49.647275 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 c19454612908 7 hours ago 1.41GB 2025-06-03 16:03:49.647294 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 e4c6325b4404 7 hours ago 1.41GB 2025-06-03 16:03:49.647312 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 1be19cbf953d 7 hours ago 1.04GB 2025-06-03 16:03:49.647330 | orchestrator | registry.osism.tech/osism/ceph-daemon reef d62d4cf5c710 13 hours ago 1.27GB 2025-06-03 16:03:49.886265 | orchestrator | 
+ for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-03 16:03:49.886359 | orchestrator | ++ semver latest 5.0.0 2025-06-03 16:03:49.948391 | orchestrator | 2025-06-03 16:03:49.948536 | orchestrator | ## Containers @ testbed-node-2 2025-06-03 16:03:49.948567 | orchestrator | 2025-06-03 16:03:49.948587 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-03 16:03:49.948608 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-03 16:03:49.948620 | orchestrator | + echo 2025-06-03 16:03:49.948657 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-06-03 16:03:49.948670 | orchestrator | + echo 2025-06-03 16:03:49.948681 | orchestrator | + osism container testbed-node-2 ps 2025-06-03 16:03:51.983014 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-03 16:03:51.983122 | orchestrator | 694f6953d6e2 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_novncproxy 2025-06-03 16:03:51.983137 | orchestrator | 34a2dd7a1e58 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-06-03 16:03:51.983148 | orchestrator | d31f06409070 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-06-03 16:03:51.983159 | orchestrator | 8fd57ab78368 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-06-03 16:03:51.983168 | orchestrator | e38eeccb3214 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-06-03 16:03:51.983178 | orchestrator | 1dc687470818 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-06-03 16:03:51.983188 | orchestrator | f9d4710cb1b9 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 
minutes (healthy) cinder_scheduler 2025-06-03 16:03:51.983198 | orchestrator | 730472b883b1 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-03 16:03:51.983208 | orchestrator | 8d94455fb1b2 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-06-03 16:03:51.983222 | orchestrator | d5945104b1b3 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-06-03 16:03:51.983239 | orchestrator | 822b50e1d1e3 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-06-03 16:03:51.983264 | orchestrator | 211974228aa0 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-06-03 16:03:51.983281 | orchestrator | ac733aaf7b6a registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-06-03 16:03:51.983296 | orchestrator | 41ec129789c3 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-06-03 16:03:51.983311 | orchestrator | 151d919472b4 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2025-06-03 16:03:51.983326 | orchestrator | 8c46a6e96a7f registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2025-06-03 16:03:51.983342 | orchestrator | cf6aac55c22e registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2025-06-03 16:03:51.983358 | orchestrator | 9b06c3da9b3d 
registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2025-06-03 16:03:51.983402 | orchestrator | 4ef5c6930a3a registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2025-06-03 16:03:51.983419 | orchestrator | c651e40c54cd registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2025-06-03 16:03:51.983453 | orchestrator | 4beff30f9a0a registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central 2025-06-03 16:03:51.983491 | orchestrator | e0edcaaffa58 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api 2025-06-03 16:03:51.983506 | orchestrator | aa8cb66a727a registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2025-06-03 16:03:51.983521 | orchestrator | 66f0c5f04504 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2025-06-03 16:03:51.983536 | orchestrator | 0ed8868da6f9 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2025-06-03 16:03:51.983551 | orchestrator | 31a00aa50331 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2025-06-03 16:03:51.983568 | orchestrator | aa6ac63b2059 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2 2025-06-03 16:03:51.983605 | orchestrator | 906b6651b901 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2025-06-03 
16:03:51.983623 | orchestrator | a186cac8d88f registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2025-06-03 16:03:51.983658 | orchestrator | 7aec04017cba registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2025-06-03 16:03:51.983676 | orchestrator | 9ab72875002e registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2025-06-03 16:03:51.983689 | orchestrator | 16d409d833b5 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2025-06-03 16:03:51.983700 | orchestrator | 144dffeac09b registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb 2025-06-03 16:03:51.983712 | orchestrator | 89565f66f037 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-06-03 16:03:51.983734 | orchestrator | fcee5abe116c registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-2 2025-06-03 16:03:51.983746 | orchestrator | d30116516e25 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived 2025-06-03 16:03:51.983758 | orchestrator | 135d741b5763 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2025-06-03 16:03:51.983769 | orchestrator | 1ca8c52e466e registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2025-06-03 16:03:51.983795 | orchestrator | 8a560ce90a3b registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_northd 2025-06-03 16:03:51.983808 | orchestrator | b37260631cc8 
registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_sb_db 2025-06-03 16:03:51.983820 | orchestrator | 9406fe4efd3e registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_nb_db 2025-06-03 16:03:51.983831 | orchestrator | 4229a2e5fc89 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-06-03 16:03:51.983843 | orchestrator | c9a7e5b02550 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2025-06-03 16:03:51.983854 | orchestrator | 673a9bcd3b50 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-2 2025-06-03 16:03:51.983896 | orchestrator | 832df75f9f16 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2025-06-03 16:03:51.983908 | orchestrator | 1f00cca99c88 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db 2025-06-03 16:03:51.983917 | orchestrator | a21c44badcbe registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel 2025-06-03 16:03:51.983927 | orchestrator | 4d4fabd32435 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis 2025-06-03 16:03:51.983937 | orchestrator | 7a8971692066 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached 2025-06-03 16:03:51.983946 | orchestrator | a2e9bc8aa9bc registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-06-03 16:03:51.983956 | orchestrator | ebd46a0f93af registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 
minutes kolla_toolbox 2025-06-03 16:03:51.983966 | orchestrator | 62c92a8d9c19 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-06-03 16:03:52.227793 | orchestrator | 2025-06-03 16:03:52.227974 | orchestrator | ## Images @ testbed-node-2 2025-06-03 16:03:52.227995 | orchestrator | 2025-06-03 16:03:52.228009 | orchestrator | + echo 2025-06-03 16:03:52.228021 | orchestrator | + echo '## Images @ testbed-node-2' 2025-06-03 16:03:52.228034 | orchestrator | + echo 2025-06-03 16:03:52.228046 | orchestrator | + osism container testbed-node-2 images 2025-06-03 16:03:54.355822 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-03 16:03:54.355916 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 634f31a04325 7 hours ago 330MB 2025-06-03 16:03:54.355923 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 0fdc644c8234 7 hours ago 747MB 2025-06-03 16:03:54.355928 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 c22b3d564931 7 hours ago 376MB 2025-06-03 16:03:54.355932 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 3376d3a7a1c7 7 hours ago 1.01GB 2025-06-03 16:03:54.355936 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 63c10f4242b0 7 hours ago 1.59GB 2025-06-03 16:03:54.355957 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 97b2835a35a8 7 hours ago 1.55GB 2025-06-03 16:03:54.355961 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 dec51591a0e8 7 hours ago 629MB 2025-06-03 16:03:54.355965 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 fa4e375f6261 7 hours ago 319MB 2025-06-03 16:03:54.355969 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 c7a6dc5a9b00 7 hours ago 419MB 2025-06-03 16:03:54.355973 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 4330adb17ac9 7 hours ago 327MB 2025-06-03 16:03:54.355977 | orchestrator | registry.osism.tech/kolla/cron 2024.2 0f9cf6fe7555 7 hours ago 319MB 
2025-06-03 16:03:54.355981 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 f4ddcd428b56 7 hours ago 591MB 2025-06-03 16:03:54.355985 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 22e26dab78da 7 hours ago 362MB 2025-06-03 16:03:54.355989 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 cbb28c9d8883 7 hours ago 362MB 2025-06-03 16:03:54.356005 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 41c46a08e659 7 hours ago 1.21GB 2025-06-03 16:03:54.356009 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 644b1b5e2c11 7 hours ago 359MB 2025-06-03 16:03:54.356013 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 2ec55a760f26 7 hours ago 345MB 2025-06-03 16:03:54.356017 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 2c335042013f 7 hours ago 352MB 2025-06-03 16:03:54.356021 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b1843478586a 7 hours ago 411MB 2025-06-03 16:03:54.356025 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 29302ad2830d 7 hours ago 354MB 2025-06-03 16:03:54.356028 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 a58bb5190322 7 hours ago 325MB 2025-06-03 16:03:54.356032 | orchestrator | registry.osism.tech/kolla/redis 2024.2 b048cb752f3b 7 hours ago 326MB 2025-06-03 16:03:54.356036 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 b3679ee6b1b5 7 hours ago 947MB 2025-06-03 16:03:54.356040 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 e46b3e1dbd91 7 hours ago 947MB 2025-06-03 16:03:54.356043 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 c7347a37636b 7 hours ago 948MB 2025-06-03 16:03:54.356047 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 9414e264063d 7 hours ago 948MB 2025-06-03 16:03:54.356051 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 
39da9b2b3d20 7 hours ago 1.15GB 2025-06-03 16:03:54.356055 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 e0ae3d4fab8f 7 hours ago 1.25GB 2025-06-03 16:03:54.356058 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f3a2eecc9019 7 hours ago 1.13GB 2025-06-03 16:03:54.356062 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 598849ffc73a 7 hours ago 1.11GB 2025-06-03 16:03:54.356066 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 6d7a75eb8a59 7 hours ago 1.11GB 2025-06-03 16:03:54.356070 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 03d00cc13400 7 hours ago 1.2GB 2025-06-03 16:03:54.356073 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 5020d0e4252c 7 hours ago 1.31GB 2025-06-03 16:03:54.356077 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 d333c48c38c1 7 hours ago 1.05GB 2025-06-03 16:03:54.356081 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 689c02a1f7cb 7 hours ago 1.05GB 2025-06-03 16:03:54.356088 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 4d6977a4610d 7 hours ago 1.06GB 2025-06-03 16:03:54.356102 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 aad8c5e6844b 7 hours ago 1.06GB 2025-06-03 16:03:54.356106 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 8247bb90135e 7 hours ago 1.05GB 2025-06-03 16:03:54.356110 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 d1a0f527c167 7 hours ago 1.05GB 2025-06-03 16:03:54.356114 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 f3cb1847373e 7 hours ago 1.29GB 2025-06-03 16:03:54.356118 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 9ffe494651e6 7 hours ago 1.29GB 2025-06-03 16:03:54.356121 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 99768b90d9f9 7 hours ago 1.42GB 2025-06-03 16:03:54.356125 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 da1dffd27d6e 
7 hours ago 1.3GB 2025-06-03 16:03:54.356131 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 3ec47b84743f 7 hours ago 1.06GB 2025-06-03 16:03:54.356137 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 ea72918ad24f 7 hours ago 1.06GB 2025-06-03 16:03:54.356143 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c9601856b035 7 hours ago 1.06GB 2025-06-03 16:03:54.356150 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 c19454612908 7 hours ago 1.41GB 2025-06-03 16:03:54.356156 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 e4c6325b4404 7 hours ago 1.41GB 2025-06-03 16:03:54.356162 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 1be19cbf953d 7 hours ago 1.04GB 2025-06-03 16:03:54.356168 | orchestrator | registry.osism.tech/osism/ceph-daemon reef d62d4cf5c710 13 hours ago 1.27GB 2025-06-03 16:03:54.584994 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-06-03 16:03:54.590207 | orchestrator | + set -e 2025-06-03 16:03:54.590314 | orchestrator | + source /opt/manager-vars.sh 2025-06-03 16:03:54.591563 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-03 16:03:54.591653 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-03 16:03:54.591666 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-03 16:03:54.591677 | orchestrator | ++ CEPH_VERSION=reef 2025-06-03 16:03:54.591687 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-03 16:03:54.591698 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-03 16:03:54.591708 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-03 16:03:54.591718 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-03 16:03:54.591728 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-03 16:03:54.591738 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-03 16:03:54.591748 | orchestrator | ++ export ARA=false 2025-06-03 16:03:54.591758 | orchestrator | ++ ARA=false 2025-06-03 16:03:54.591768 | 
orchestrator | ++ export DEPLOY_MODE=manager 2025-06-03 16:03:54.591778 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-03 16:03:54.591788 | orchestrator | ++ export TEMPEST=false 2025-06-03 16:03:54.591797 | orchestrator | ++ TEMPEST=false 2025-06-03 16:03:54.591807 | orchestrator | ++ export IS_ZUUL=true 2025-06-03 16:03:54.591816 | orchestrator | ++ IS_ZUUL=true 2025-06-03 16:03:54.591826 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.16 2025-06-03 16:03:54.591836 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.16 2025-06-03 16:03:54.591845 | orchestrator | ++ export EXTERNAL_API=false 2025-06-03 16:03:54.591855 | orchestrator | ++ EXTERNAL_API=false 2025-06-03 16:03:54.591865 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-03 16:03:54.591923 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-03 16:03:54.591936 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-03 16:03:54.591946 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-03 16:03:54.591955 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-03 16:03:54.591965 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-03 16:03:54.591975 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-03 16:03:54.591985 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-06-03 16:03:54.602950 | orchestrator | + set -e 2025-06-03 16:03:54.603075 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-03 16:03:54.603090 | orchestrator | ++ export INTERACTIVE=false 2025-06-03 16:03:54.603103 | orchestrator | ++ INTERACTIVE=false 2025-06-03 16:03:54.603114 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-03 16:03:54.603125 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-03 16:03:54.603136 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-03 16:03:54.604699 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 
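The trace above shows `manager-version.sh` reading `manager_version` out of `environments/manager/configuration.yml` with awk (field separator `": "`). The same extraction, sketched against a stand-in file rather than the real configuration:

```shell
# Pull the value of manager_version out of a YAML file, as the check
# scripts do. The file content here is a stand-in example.
cat > /tmp/configuration.yml <<'EOF'
manager_version: latest
openstack_version: 2024.2
EOF
# -F': ' splits on "colon space", so $2 is the value after the key.
manager_version=$(awk '-F: ' '/^manager_version:/ { print $2 }' /tmp/configuration.yml)
echo "MANAGER_VERSION=$manager_version"
```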
2025-06-03 16:03:54.610737 | orchestrator | 2025-06-03 16:03:54.610839 | orchestrator | # Ceph status 2025-06-03 16:03:54.610868 | orchestrator | 2025-06-03 16:03:54.610959 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-03 16:03:54.610986 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-03 16:03:54.611004 | orchestrator | + echo 2025-06-03 16:03:54.611021 | orchestrator | + echo '# Ceph status' 2025-06-03 16:03:54.611038 | orchestrator | + echo 2025-06-03 16:03:54.611056 | orchestrator | + ceph -s 2025-06-03 16:03:55.230773 | orchestrator | cluster: 2025-06-03 16:03:55.230871 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-06-03 16:03:55.230944 | orchestrator | health: HEALTH_OK 2025-06-03 16:03:55.230952 | orchestrator | 2025-06-03 16:03:55.230958 | orchestrator | services: 2025-06-03 16:03:55.230963 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 29m) 2025-06-03 16:03:55.230969 | orchestrator | mgr: testbed-node-0(active, since 17m), standbys: testbed-node-1, testbed-node-2 2025-06-03 16:03:55.230974 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-06-03 16:03:55.230978 | orchestrator | osd: 6 osds: 6 up (since 25m), 6 in (since 26m) 2025-06-03 16:03:55.230983 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-06-03 16:03:55.230987 | orchestrator | 2025-06-03 16:03:55.230992 | orchestrator | data: 2025-06-03 16:03:55.230996 | orchestrator | volumes: 1/1 healthy 2025-06-03 16:03:55.231000 | orchestrator | pools: 14 pools, 401 pgs 2025-06-03 16:03:55.231004 | orchestrator | objects: 524 objects, 2.2 GiB 2025-06-03 16:03:55.231008 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-06-03 16:03:55.231012 | orchestrator | pgs: 401 active+clean 2025-06-03 16:03:55.231016 | orchestrator | 2025-06-03 16:03:55.277343 | orchestrator | 2025-06-03 16:03:55.277416 | orchestrator | # Ceph versions 2025-06-03 16:03:55.277422 | orchestrator | 2025-06-03 16:03:55.277427 | 
orchestrator | + echo 2025-06-03 16:03:55.277432 | orchestrator | + echo '# Ceph versions' 2025-06-03 16:03:55.277437 | orchestrator | + echo 2025-06-03 16:03:55.277451 | orchestrator | + ceph versions 2025-06-03 16:03:55.867707 | orchestrator | { 2025-06-03 16:03:55.867867 | orchestrator | "mon": { 2025-06-03 16:03:55.867967 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-03 16:03:55.868007 | orchestrator | }, 2025-06-03 16:03:55.868026 | orchestrator | "mgr": { 2025-06-03 16:03:55.868042 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-03 16:03:55.868060 | orchestrator | }, 2025-06-03 16:03:55.868078 | orchestrator | "osd": { 2025-06-03 16:03:55.868096 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-06-03 16:03:55.868115 | orchestrator | }, 2025-06-03 16:03:55.868133 | orchestrator | "mds": { 2025-06-03 16:03:55.868150 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-03 16:03:55.868169 | orchestrator | }, 2025-06-03 16:03:55.868186 | orchestrator | "rgw": { 2025-06-03 16:03:55.868205 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-03 16:03:55.868224 | orchestrator | }, 2025-06-03 16:03:55.868244 | orchestrator | "overall": { 2025-06-03 16:03:55.868264 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-06-03 16:03:55.868279 | orchestrator | } 2025-06-03 16:03:55.868297 | orchestrator | } 2025-06-03 16:03:55.915063 | orchestrator | 2025-06-03 16:03:55.915182 | orchestrator | # Ceph OSD tree 2025-06-03 16:03:55.915200 | orchestrator | 2025-06-03 16:03:55.915209 | orchestrator | + echo 2025-06-03 16:03:55.915218 | orchestrator | + echo '# Ceph OSD tree' 2025-06-03 16:03:55.915227 | orchestrator | + echo 2025-06-03 16:03:55.915235 
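A fully deployed cluster should report exactly one version string across all daemon types in `ceph versions`, as the JSON above does (18.2.7 everywhere, 18 daemons overall). A sketch of that consistency check using only grep/sort over an inlined sample — with `jq` available, counting keys of `.overall` would be cleaner:

```shell
# Count distinct "ceph version ..." strings in `ceph versions` output;
# a fully upgraded cluster reports exactly one. Sample JSON inlined.
versions_json=$(cat <<'EOF'
{
    "mon": { "ceph version 18.2.7 (6b0e) reef (stable)": 3 },
    "osd": { "ceph version 18.2.7 (6b0e) reef (stable)": 6 },
    "overall": { "ceph version 18.2.7 (6b0e) reef (stable)": 18 }
}
EOF
)
distinct=$(printf '%s\n' "$versions_json" \
  | grep -o '"ceph version [^"]*"' | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
  echo "cluster daemons on a single version"
else
  echo "mixed versions detected: $distinct distinct strings" >&2
fi
```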
| orchestrator | + ceph osd df tree 2025-06-03 16:03:56.442363 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-06-03 16:03:56.442497 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 425 MiB 113 GiB 5.91 1.00 - root default 2025-06-03 16:03:56.442536 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-06-03 16:03:56.442549 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 928 MiB 859 MiB 1 KiB 70 MiB 19 GiB 4.54 0.77 209 up osd.1 2025-06-03 16:03:56.442560 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 74 MiB 19 GiB 7.29 1.23 181 up osd.3 2025-06-03 16:03:56.442571 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-06-03 16:03:56.442583 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.89 1.00 186 up osd.0 2025-06-03 16:03:56.442593 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.94 1.01 202 up osd.4 2025-06-03 16:03:56.442604 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-5 2025-06-03 16:03:56.442615 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.77 1.14 206 up osd.2 2025-06-03 16:03:56.442631 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.0 GiB 963 MiB 1 KiB 70 MiB 19 GiB 5.04 0.85 186 up osd.5 2025-06-03 16:03:56.442642 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 425 MiB 113 GiB 5.91 2025-06-03 16:03:56.442653 | orchestrator | MIN/MAX VAR: 0.77/1.23 STDDEV: 0.94 2025-06-03 16:03:56.486859 | orchestrator | 2025-06-03 16:03:56.487014 | orchestrator | # Ceph monitor status 2025-06-03 16:03:56.487030 | orchestrator | 2025-06-03 16:03:56.487043 | orchestrator | + echo 2025-06-03 16:03:56.487054 | orchestrator | + echo '# Ceph monitor status' 2025-06-03 16:03:56.487066 | orchestrator | + 
echo 2025-06-03 16:03:56.487077 | orchestrator | + ceph mon stat 2025-06-03 16:03:57.080454 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-06-03 16:03:57.124176 | orchestrator | 2025-06-03 16:03:57.124261 | orchestrator | # Ceph quorum status 2025-06-03 16:03:57.124273 | orchestrator | 2025-06-03 16:03:57.124278 | orchestrator | + echo 2025-06-03 16:03:57.124283 | orchestrator | + echo '# Ceph quorum status' 2025-06-03 16:03:57.124287 | orchestrator | + echo 2025-06-03 16:03:57.124968 | orchestrator | + ceph quorum_status 2025-06-03 16:03:57.125005 | orchestrator | + jq 2025-06-03 16:03:57.800850 | orchestrator | { 2025-06-03 16:03:57.801042 | orchestrator | "election_epoch": 4, 2025-06-03 16:03:57.801072 | orchestrator | "quorum": [ 2025-06-03 16:03:57.801092 | orchestrator | 0, 2025-06-03 16:03:57.801113 | orchestrator | 1, 2025-06-03 16:03:57.801132 | orchestrator | 2 2025-06-03 16:03:57.801152 | orchestrator | ], 2025-06-03 16:03:57.801170 | orchestrator | "quorum_names": [ 2025-06-03 16:03:57.801191 | orchestrator | "testbed-node-0", 2025-06-03 16:03:57.801210 | orchestrator | "testbed-node-1", 2025-06-03 16:03:57.801229 | orchestrator | "testbed-node-2" 2025-06-03 16:03:57.801248 | orchestrator | ], 2025-06-03 16:03:57.801266 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-06-03 16:03:57.801278 | orchestrator | "quorum_age": 1782, 2025-06-03 16:03:57.801289 | orchestrator | "features": { 2025-06-03 16:03:57.801301 | orchestrator | "quorum_con": "4540138322906710015", 2025-06-03 16:03:57.801312 | orchestrator | "quorum_mon": [ 2025-06-03 16:03:57.801323 | orchestrator | "kraken", 2025-06-03 16:03:57.801334 | orchestrator | 
"luminous", 2025-06-03 16:03:57.801345 | orchestrator | "mimic", 2025-06-03 16:03:57.801356 | orchestrator | "osdmap-prune", 2025-06-03 16:03:57.801367 | orchestrator | "nautilus", 2025-06-03 16:03:57.801380 | orchestrator | "octopus", 2025-06-03 16:03:57.801393 | orchestrator | "pacific", 2025-06-03 16:03:57.801405 | orchestrator | "elector-pinging", 2025-06-03 16:03:57.801417 | orchestrator | "quincy", 2025-06-03 16:03:57.801429 | orchestrator | "reef" 2025-06-03 16:03:57.801442 | orchestrator | ] 2025-06-03 16:03:57.801455 | orchestrator | }, 2025-06-03 16:03:57.801468 | orchestrator | "monmap": { 2025-06-03 16:03:57.801508 | orchestrator | "epoch": 1, 2025-06-03 16:03:57.801522 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-06-03 16:03:57.801542 | orchestrator | "modified": "2025-06-03T15:34:00.998847Z", 2025-06-03 16:03:57.801561 | orchestrator | "created": "2025-06-03T15:34:00.998847Z", 2025-06-03 16:03:57.801580 | orchestrator | "min_mon_release": 18, 2025-06-03 16:03:57.801599 | orchestrator | "min_mon_release_name": "reef", 2025-06-03 16:03:57.801611 | orchestrator | "election_strategy": 1, 2025-06-03 16:03:57.801621 | orchestrator | "disallowed_leaders: ": "", 2025-06-03 16:03:57.801632 | orchestrator | "stretch_mode": false, 2025-06-03 16:03:57.801643 | orchestrator | "tiebreaker_mon": "", 2025-06-03 16:03:57.801654 | orchestrator | "removed_ranks: ": "", 2025-06-03 16:03:57.801664 | orchestrator | "features": { 2025-06-03 16:03:57.801676 | orchestrator | "persistent": [ 2025-06-03 16:03:57.801694 | orchestrator | "kraken", 2025-06-03 16:03:57.801709 | orchestrator | "luminous", 2025-06-03 16:03:57.801737 | orchestrator | "mimic", 2025-06-03 16:03:57.801758 | orchestrator | "osdmap-prune", 2025-06-03 16:03:57.801774 | orchestrator | "nautilus", 2025-06-03 16:03:57.801790 | orchestrator | "octopus", 2025-06-03 16:03:57.801840 | orchestrator | "pacific", 2025-06-03 16:03:57.801858 | orchestrator | "elector-pinging", 2025-06-03 
16:03:57.801874 | orchestrator | "quincy", 2025-06-03 16:03:57.801919 | orchestrator | "reef" 2025-06-03 16:03:57.801939 | orchestrator | ], 2025-06-03 16:03:57.801953 | orchestrator | "optional": [] 2025-06-03 16:03:57.801969 | orchestrator | }, 2025-06-03 16:03:57.801985 | orchestrator | "mons": [ 2025-06-03 16:03:57.802001 | orchestrator | { 2025-06-03 16:03:57.802067 | orchestrator | "rank": 0, 2025-06-03 16:03:57.802088 | orchestrator | "name": "testbed-node-0", 2025-06-03 16:03:57.802107 | orchestrator | "public_addrs": { 2025-06-03 16:03:57.802126 | orchestrator | "addrvec": [ 2025-06-03 16:03:57.802144 | orchestrator | { 2025-06-03 16:03:57.802162 | orchestrator | "type": "v2", 2025-06-03 16:03:57.802182 | orchestrator | "addr": "192.168.16.10:3300", 2025-06-03 16:03:57.802202 | orchestrator | "nonce": 0 2025-06-03 16:03:57.802221 | orchestrator | }, 2025-06-03 16:03:57.802236 | orchestrator | { 2025-06-03 16:03:57.802247 | orchestrator | "type": "v1", 2025-06-03 16:03:57.802258 | orchestrator | "addr": "192.168.16.10:6789", 2025-06-03 16:03:57.802269 | orchestrator | "nonce": 0 2025-06-03 16:03:57.802280 | orchestrator | } 2025-06-03 16:03:57.802291 | orchestrator | ] 2025-06-03 16:03:57.802301 | orchestrator | }, 2025-06-03 16:03:57.802312 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-06-03 16:03:57.802323 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-06-03 16:03:57.802334 | orchestrator | "priority": 0, 2025-06-03 16:03:57.802345 | orchestrator | "weight": 0, 2025-06-03 16:03:57.802356 | orchestrator | "crush_location": "{}" 2025-06-03 16:03:57.802366 | orchestrator | }, 2025-06-03 16:03:57.802377 | orchestrator | { 2025-06-03 16:03:57.802388 | orchestrator | "rank": 1, 2025-06-03 16:03:57.802400 | orchestrator | "name": "testbed-node-1", 2025-06-03 16:03:57.802411 | orchestrator | "public_addrs": { 2025-06-03 16:03:57.802421 | orchestrator | "addrvec": [ 2025-06-03 16:03:57.802432 | orchestrator | { 2025-06-03 16:03:57.802443 | 
orchestrator | "type": "v2", 2025-06-03 16:03:57.802454 | orchestrator | "addr": "192.168.16.11:3300", 2025-06-03 16:03:57.802465 | orchestrator | "nonce": 0 2025-06-03 16:03:57.802475 | orchestrator | }, 2025-06-03 16:03:57.802486 | orchestrator | { 2025-06-03 16:03:57.802496 | orchestrator | "type": "v1", 2025-06-03 16:03:57.802507 | orchestrator | "addr": "192.168.16.11:6789", 2025-06-03 16:03:57.802518 | orchestrator | "nonce": 0 2025-06-03 16:03:57.802528 | orchestrator | } 2025-06-03 16:03:57.802539 | orchestrator | ] 2025-06-03 16:03:57.802550 | orchestrator | }, 2025-06-03 16:03:57.802560 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-06-03 16:03:57.802571 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-06-03 16:03:57.802582 | orchestrator | "priority": 0, 2025-06-03 16:03:57.802592 | orchestrator | "weight": 0, 2025-06-03 16:03:57.802603 | orchestrator | "crush_location": "{}" 2025-06-03 16:03:57.802614 | orchestrator | }, 2025-06-03 16:03:57.802624 | orchestrator | { 2025-06-03 16:03:57.802635 | orchestrator | "rank": 2, 2025-06-03 16:03:57.802646 | orchestrator | "name": "testbed-node-2", 2025-06-03 16:03:57.802657 | orchestrator | "public_addrs": { 2025-06-03 16:03:57.802667 | orchestrator | "addrvec": [ 2025-06-03 16:03:57.802693 | orchestrator | { 2025-06-03 16:03:57.802704 | orchestrator | "type": "v2", 2025-06-03 16:03:57.802715 | orchestrator | "addr": "192.168.16.12:3300", 2025-06-03 16:03:57.802725 | orchestrator | "nonce": 0 2025-06-03 16:03:57.802736 | orchestrator | }, 2025-06-03 16:03:57.802747 | orchestrator | { 2025-06-03 16:03:57.802757 | orchestrator | "type": "v1", 2025-06-03 16:03:57.802769 | orchestrator | "addr": "192.168.16.12:6789", 2025-06-03 16:03:57.802780 | orchestrator | "nonce": 0 2025-06-03 16:03:57.802790 | orchestrator | } 2025-06-03 16:03:57.802801 | orchestrator | ] 2025-06-03 16:03:57.802811 | orchestrator | }, 2025-06-03 16:03:57.802822 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-06-03 
16:03:57.802995 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-06-03 16:03:57.803017 | orchestrator | "priority": 0, 2025-06-03 16:03:57.803029 | orchestrator | "weight": 0, 2025-06-03 16:03:57.803040 | orchestrator | "crush_location": "{}" 2025-06-03 16:03:57.803051 | orchestrator | } 2025-06-03 16:03:57.803062 | orchestrator | ] 2025-06-03 16:03:57.803073 | orchestrator | } 2025-06-03 16:03:57.803084 | orchestrator | } 2025-06-03 16:03:57.803111 | orchestrator | 2025-06-03 16:03:57.803123 | orchestrator | # Ceph free space status 2025-06-03 16:03:57.803134 | orchestrator | 2025-06-03 16:03:57.803145 | orchestrator | + echo 2025-06-03 16:03:57.803155 | orchestrator | + echo '# Ceph free space status' 2025-06-03 16:03:57.803167 | orchestrator | + echo 2025-06-03 16:03:57.803178 | orchestrator | + ceph df 2025-06-03 16:03:58.403270 | orchestrator | --- RAW STORAGE --- 2025-06-03 16:03:58.403380 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-06-03 16:03:58.403408 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91 2025-06-03 16:03:58.403418 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91 2025-06-03 16:03:58.403428 | orchestrator | 2025-06-03 16:03:58.403438 | orchestrator | --- POOLS --- 2025-06-03 16:03:58.403449 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-06-03 16:03:58.403461 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-06-03 16:03:58.403471 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-06-03 16:03:58.403481 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-06-03 16:03:58.403490 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-06-03 16:03:58.403499 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-06-03 16:03:58.403509 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-06-03 16:03:58.403519 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-06-03 16:03:58.403528 | 
orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-06-03 16:03:58.403538 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2025-06-03 16:03:58.403548 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-06-03 16:03:58.403559 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-06-03 16:03:58.403571 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.95 35 GiB 2025-06-03 16:03:58.403581 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-06-03 16:03:58.403592 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-06-03 16:03:58.448901 | orchestrator | ++ semver latest 5.0.0 2025-06-03 16:03:58.504199 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-03 16:03:58.504317 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-03 16:03:58.504341 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-06-03 16:03:58.504360 | orchestrator | + osism apply facts 2025-06-03 16:04:00.196152 | orchestrator | Registering Redlock._acquired_script 2025-06-03 16:04:00.196316 | orchestrator | Registering Redlock._extend_script 2025-06-03 16:04:00.196344 | orchestrator | Registering Redlock._release_script 2025-06-03 16:04:00.256378 | orchestrator | 2025-06-03 16:04:00 | INFO  | Task 24ab7ea7-de8e-481d-8a51-99f248113e9b (facts) was prepared for execution. 2025-06-03 16:04:00.256495 | orchestrator | 2025-06-03 16:04:00 | INFO  | It takes a moment until task 24ab7ea7-de8e-481d-8a51-99f248113e9b (facts) has been started and output is visible here. 
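The monmap dump earlier in the log shows three mons, each advertising a msgr2 (`v2`, port 3300) and a legacy (`v1`, port 6789) endpoint. A minimal sketch of checking that programmatically, using a trimmed sample shaped like the logged JSON (the addresses and names are the testbed values from the log; this is not a live cluster query):

```python
import json

# Trimmed monmap sample mirroring the structure logged above
# (testbed addresses copied from the log; not a live query).
monmap = json.loads("""
{
  "epoch": 1,
  "min_mon_release_name": "reef",
  "mons": [
    {"rank": 0, "name": "testbed-node-0",
     "public_addrs": {"addrvec": [
       {"type": "v2", "addr": "192.168.16.10:3300", "nonce": 0},
       {"type": "v1", "addr": "192.168.16.10:6789", "nonce": 0}]}},
    {"rank": 1, "name": "testbed-node-1",
     "public_addrs": {"addrvec": [
       {"type": "v2", "addr": "192.168.16.11:3300", "nonce": 0},
       {"type": "v1", "addr": "192.168.16.11:6789", "nonce": 0}]}},
    {"rank": 2, "name": "testbed-node-2",
     "public_addrs": {"addrvec": [
       {"type": "v2", "addr": "192.168.16.12:3300", "nonce": 0},
       {"type": "v1", "addr": "192.168.16.12:6789", "nonce": 0}]}}
  ]
}
""")

def mon_endpoints(monmap):
    """Map each mon name to its advertised addresses, keyed by protocol."""
    out = {}
    for mon in monmap["mons"]:
        out[mon["name"]] = {a["type"]: a["addr"]
                            for a in mon["public_addrs"]["addrvec"]}
    return out

eps = mon_endpoints(monmap)
# Every mon should expose both protocols on the expected ports.
assert all(a["v2"].endswith(":3300") and a["v1"].endswith(":6789")
           for a in eps.values())
print(len(eps), "mons:", sorted(eps))
```

The same check could be run against a live cluster by feeding it the output of `ceph mon dump -f json` instead of the inline sample.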
2025-06-03 16:04:04.322654 | orchestrator | 2025-06-03 16:04:04.322728 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-03 16:04:04.324976 | orchestrator | 2025-06-03 16:04:04.325448 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-03 16:04:04.326695 | orchestrator | Tuesday 03 June 2025 16:04:04 +0000 (0:00:00.288) 0:00:00.288 ********** 2025-06-03 16:04:05.834660 | orchestrator | ok: [testbed-manager] 2025-06-03 16:04:05.835078 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:04:05.838401 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:04:05.838475 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:04:05.838490 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:04:05.838501 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:04:05.838511 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:04:05.838521 | orchestrator | 2025-06-03 16:04:05.839065 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-03 16:04:05.839300 | orchestrator | Tuesday 03 June 2025 16:04:05 +0000 (0:00:01.511) 0:00:01.800 ********** 2025-06-03 16:04:05.996626 | orchestrator | skipping: [testbed-manager] 2025-06-03 16:04:06.077727 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:04:06.160107 | orchestrator | skipping: [testbed-node-1] 2025-06-03 16:04:06.237323 | orchestrator | skipping: [testbed-node-2] 2025-06-03 16:04:06.316111 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:04:07.050871 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:04:07.055533 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:04:07.058100 | orchestrator | 2025-06-03 16:04:07.059594 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-03 16:04:07.062522 | orchestrator | 2025-06-03 16:04:07.065503 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-06-03 16:04:07.066837 | orchestrator | Tuesday 03 June 2025 16:04:07 +0000 (0:00:01.218) 0:00:03.019 ********** 2025-06-03 16:04:12.129023 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:04:12.130115 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:04:12.131045 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:04:12.135019 | orchestrator | ok: [testbed-manager] 2025-06-03 16:04:12.135064 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:04:12.135077 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:04:12.135088 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:04:12.135100 | orchestrator | 2025-06-03 16:04:12.136161 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-03 16:04:12.136393 | orchestrator | 2025-06-03 16:04:12.137031 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-03 16:04:12.137755 | orchestrator | Tuesday 03 June 2025 16:04:12 +0000 (0:00:05.080) 0:00:08.099 ********** 2025-06-03 16:04:12.293863 | orchestrator | skipping: [testbed-manager] 2025-06-03 16:04:12.372949 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:04:12.454870 | orchestrator | skipping: [testbed-node-1] 2025-06-03 16:04:12.537043 | orchestrator | skipping: [testbed-node-2] 2025-06-03 16:04:12.617829 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:04:12.654674 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:04:12.655769 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:04:12.656936 | orchestrator | 2025-06-03 16:04:12.656981 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 16:04:12.657410 | orchestrator | 2025-06-03 16:04:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-03 16:04:12.657530 | orchestrator | 2025-06-03 16:04:12 | INFO  | Please wait and do not abort execution. 2025-06-03 16:04:12.658594 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:04:12.659517 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:04:12.659733 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:04:12.660252 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:04:12.660555 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:04:12.661367 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:04:12.661436 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:04:12.661804 | orchestrator | 2025-06-03 16:04:12.662166 | orchestrator | 2025-06-03 16:04:12.662573 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 16:04:12.663046 | orchestrator | Tuesday 03 June 2025 16:04:12 +0000 (0:00:00.526) 0:00:08.626 ********** 2025-06-03 16:04:12.663615 | orchestrator | =============================================================================== 2025-06-03 16:04:12.663666 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.08s 2025-06-03 16:04:12.664087 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.51s 2025-06-03 16:04:12.664321 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.22s 2025-06-03 16:04:12.664790 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2025-06-03 
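The PLAY RECAP lines above all show `failed=0 unreachable=0`, which is what the job relies on before moving to the validators. A small helper for pulling those counters out of a recap line (this is an illustration of what the columns encode, not part of osism or Zuul; the sample line is copied from the log):

```python
import re

# One PLAY RECAP line copied from the log above.
recap = ("testbed-node-0 : ok=2  changed=0 unreachable=0 "
         "failed=0 skipped=2  rescued=0 ignored=0")

def parse_recap(line):
    """Split an Ansible recap line into (host, {counter: value})."""
    host, _, counters = line.partition(" : ")
    stats = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", counters)}
    return host.strip(), stats

host, stats = parse_recap(recap)
# A clean run means nothing failed and every host was reachable.
assert stats["failed"] == 0 and stats["unreachable"] == 0
print(host, stats["ok"], "tasks ok")
```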
16:04:13.283399 | orchestrator | + osism validate ceph-mons 2025-06-03 16:04:14.931177 | orchestrator | Registering Redlock._acquired_script 2025-06-03 16:04:14.931262 | orchestrator | Registering Redlock._extend_script 2025-06-03 16:04:14.931268 | orchestrator | Registering Redlock._release_script 2025-06-03 16:04:34.762370 | orchestrator | 2025-06-03 16:04:34.762460 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-06-03 16:04:34.762470 | orchestrator | 2025-06-03 16:04:34.762478 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-03 16:04:34.762485 | orchestrator | Tuesday 03 June 2025 16:04:19 +0000 (0:00:00.490) 0:00:00.490 ********** 2025-06-03 16:04:34.762493 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:04:34.762499 | orchestrator | 2025-06-03 16:04:34.762506 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-03 16:04:34.762513 | orchestrator | Tuesday 03 June 2025 16:04:19 +0000 (0:00:00.671) 0:00:01.162 ********** 2025-06-03 16:04:34.762520 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:04:34.762527 | orchestrator | 2025-06-03 16:04:34.762533 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-03 16:04:34.762540 | orchestrator | Tuesday 03 June 2025 16:04:20 +0000 (0:00:00.860) 0:00:02.023 ********** 2025-06-03 16:04:34.762548 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:04:34.762556 | orchestrator | 2025-06-03 16:04:34.762563 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-03 16:04:34.762570 | orchestrator | Tuesday 03 June 2025 16:04:21 +0000 (0:00:00.244) 0:00:02.267 ********** 2025-06-03 16:04:34.762576 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:04:34.762583 | orchestrator | ok: 
[testbed-node-1] 2025-06-03 16:04:34.762589 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:04:34.762596 | orchestrator | 2025-06-03 16:04:34.762602 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-03 16:04:34.762609 | orchestrator | Tuesday 03 June 2025 16:04:21 +0000 (0:00:00.313) 0:00:02.581 ********** 2025-06-03 16:04:34.762618 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:04:34.762628 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:04:34.762638 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:04:34.762648 | orchestrator | 2025-06-03 16:04:34.762659 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-03 16:04:34.762687 | orchestrator | Tuesday 03 June 2025 16:04:22 +0000 (0:00:01.031) 0:00:03.613 ********** 2025-06-03 16:04:34.762698 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:04:34.762709 | orchestrator | skipping: [testbed-node-1] 2025-06-03 16:04:34.762719 | orchestrator | skipping: [testbed-node-2] 2025-06-03 16:04:34.762730 | orchestrator | 2025-06-03 16:04:34.762736 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-03 16:04:34.762743 | orchestrator | Tuesday 03 June 2025 16:04:22 +0000 (0:00:00.278) 0:00:03.892 ********** 2025-06-03 16:04:34.762749 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:04:34.762758 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:04:34.762771 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:04:34.762785 | orchestrator | 2025-06-03 16:04:34.762797 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-03 16:04:34.762806 | orchestrator | Tuesday 03 June 2025 16:04:23 +0000 (0:00:00.538) 0:00:04.430 ********** 2025-06-03 16:04:34.762816 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:04:34.762825 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:04:34.762834 | 
orchestrator | ok: [testbed-node-2] 2025-06-03 16:04:34.762845 | orchestrator | 2025-06-03 16:04:34.762856 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-06-03 16:04:34.762865 | orchestrator | Tuesday 03 June 2025 16:04:23 +0000 (0:00:00.321) 0:00:04.752 ********** 2025-06-03 16:04:34.762876 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:04:34.762883 | orchestrator | skipping: [testbed-node-1] 2025-06-03 16:04:34.762890 | orchestrator | skipping: [testbed-node-2] 2025-06-03 16:04:34.762896 | orchestrator | 2025-06-03 16:04:34.762902 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-06-03 16:04:34.762909 | orchestrator | Tuesday 03 June 2025 16:04:23 +0000 (0:00:00.321) 0:00:05.074 ********** 2025-06-03 16:04:34.762982 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:04:34.762992 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:04:34.763000 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:04:34.763010 | orchestrator | 2025-06-03 16:04:34.763018 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-03 16:04:34.763027 | orchestrator | Tuesday 03 June 2025 16:04:24 +0000 (0:00:00.305) 0:00:05.379 ********** 2025-06-03 16:04:34.763036 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:04:34.763045 | orchestrator | 2025-06-03 16:04:34.763053 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-03 16:04:34.763062 | orchestrator | Tuesday 03 June 2025 16:04:24 +0000 (0:00:00.689) 0:00:06.069 ********** 2025-06-03 16:04:34.763071 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:04:34.763079 | orchestrator | 2025-06-03 16:04:34.763088 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-03 16:04:34.763097 | orchestrator | Tuesday 03 June 2025 16:04:25 +0000 (0:00:00.254) 
0:00:06.324 ********** 2025-06-03 16:04:34.763106 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:04:34.763114 | orchestrator | 2025-06-03 16:04:34.763123 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:04:34.763132 | orchestrator | Tuesday 03 June 2025 16:04:25 +0000 (0:00:00.251) 0:00:06.575 ********** 2025-06-03 16:04:34.763140 | orchestrator | 2025-06-03 16:04:34.763148 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:04:34.763156 | orchestrator | Tuesday 03 June 2025 16:04:25 +0000 (0:00:00.073) 0:00:06.649 ********** 2025-06-03 16:04:34.763165 | orchestrator | 2025-06-03 16:04:34.763174 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:04:34.763182 | orchestrator | Tuesday 03 June 2025 16:04:25 +0000 (0:00:00.069) 0:00:06.718 ********** 2025-06-03 16:04:34.763191 | orchestrator | 2025-06-03 16:04:34.763200 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-03 16:04:34.763208 | orchestrator | Tuesday 03 June 2025 16:04:25 +0000 (0:00:00.070) 0:00:06.789 ********** 2025-06-03 16:04:34.763225 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:04:34.763234 | orchestrator | 2025-06-03 16:04:34.763242 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-03 16:04:34.763252 | orchestrator | Tuesday 03 June 2025 16:04:25 +0000 (0:00:00.229) 0:00:07.019 ********** 2025-06-03 16:04:34.763260 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:04:34.763268 | orchestrator | 2025-06-03 16:04:34.763290 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-06-03 16:04:34.763298 | orchestrator | Tuesday 03 June 2025 16:04:26 +0000 (0:00:00.233) 0:00:07.253 ********** 2025-06-03 16:04:34.763305 | orchestrator | ok: 
[testbed-node-0] 2025-06-03 16:04:34.763312 | orchestrator | 2025-06-03 16:04:34.763319 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-06-03 16:04:34.763327 | orchestrator | Tuesday 03 June 2025 16:04:26 +0000 (0:00:00.172) 0:00:07.425 ********** 2025-06-03 16:04:34.763334 | orchestrator | changed: [testbed-node-0] 2025-06-03 16:04:34.763341 | orchestrator | 2025-06-03 16:04:34.763349 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-06-03 16:04:34.763356 | orchestrator | Tuesday 03 June 2025 16:04:27 +0000 (0:00:01.560) 0:00:08.986 ********** 2025-06-03 16:04:34.763363 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:04:34.763371 | orchestrator | 2025-06-03 16:04:34.763378 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-06-03 16:04:34.763385 | orchestrator | Tuesday 03 June 2025 16:04:28 +0000 (0:00:00.325) 0:00:09.311 ********** 2025-06-03 16:04:34.763392 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:04:34.763399 | orchestrator | 2025-06-03 16:04:34.763407 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-06-03 16:04:34.763414 | orchestrator | Tuesday 03 June 2025 16:04:28 +0000 (0:00:00.364) 0:00:09.675 ********** 2025-06-03 16:04:34.763421 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:04:34.763428 | orchestrator | 2025-06-03 16:04:34.763436 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-06-03 16:04:34.763443 | orchestrator | Tuesday 03 June 2025 16:04:28 +0000 (0:00:00.325) 0:00:10.001 ********** 2025-06-03 16:04:34.763450 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:04:34.763457 | orchestrator | 2025-06-03 16:04:34.763465 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-06-03 16:04:34.763472 | orchestrator | 
Tuesday 03 June 2025 16:04:29 +0000 (0:00:00.334) 0:00:10.336 ********** 2025-06-03 16:04:34.763479 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:04:34.763486 | orchestrator | 2025-06-03 16:04:34.763493 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-06-03 16:04:34.763500 | orchestrator | Tuesday 03 June 2025 16:04:29 +0000 (0:00:00.120) 0:00:10.457 ********** 2025-06-03 16:04:34.763508 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:04:34.763515 | orchestrator | 2025-06-03 16:04:34.763532 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-06-03 16:04:34.763545 | orchestrator | Tuesday 03 June 2025 16:04:29 +0000 (0:00:00.132) 0:00:10.589 ********** 2025-06-03 16:04:34.763563 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:04:34.763577 | orchestrator | 2025-06-03 16:04:34.763588 | orchestrator | TASK [Gather status data] ****************************************************** 2025-06-03 16:04:34.763599 | orchestrator | Tuesday 03 June 2025 16:04:29 +0000 (0:00:00.125) 0:00:10.715 ********** 2025-06-03 16:04:34.763611 | orchestrator | changed: [testbed-node-0] 2025-06-03 16:04:34.763622 | orchestrator | 2025-06-03 16:04:34.763633 | orchestrator | TASK [Set health test data] **************************************************** 2025-06-03 16:04:34.763645 | orchestrator | Tuesday 03 June 2025 16:04:30 +0000 (0:00:01.358) 0:00:12.073 ********** 2025-06-03 16:04:34.763656 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:04:34.763669 | orchestrator | 2025-06-03 16:04:34.763681 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-06-03 16:04:34.763694 | orchestrator | Tuesday 03 June 2025 16:04:31 +0000 (0:00:00.321) 0:00:12.394 ********** 2025-06-03 16:04:34.763705 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:04:34.763791 | orchestrator | 2025-06-03 16:04:34.763799 | orchestrator | 
TASK [Pass cluster-health if health is acceptable] ***************************** 2025-06-03 16:04:34.763807 | orchestrator | Tuesday 03 June 2025 16:04:31 +0000 (0:00:00.134) 0:00:12.529 ********** 2025-06-03 16:04:34.763814 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:04:34.763821 | orchestrator | 2025-06-03 16:04:34.763829 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-06-03 16:04:34.763836 | orchestrator | Tuesday 03 June 2025 16:04:31 +0000 (0:00:00.146) 0:00:12.675 ********** 2025-06-03 16:04:34.763848 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:04:34.763856 | orchestrator | 2025-06-03 16:04:34.763863 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-06-03 16:04:34.763871 | orchestrator | Tuesday 03 June 2025 16:04:31 +0000 (0:00:00.134) 0:00:12.810 ********** 2025-06-03 16:04:34.763878 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:04:34.763885 | orchestrator | 2025-06-03 16:04:34.763892 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-03 16:04:34.763900 | orchestrator | Tuesday 03 June 2025 16:04:31 +0000 (0:00:00.329) 0:00:13.139 ********** 2025-06-03 16:04:34.763907 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:04:34.763915 | orchestrator | 2025-06-03 16:04:34.763922 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-03 16:04:34.763929 | orchestrator | Tuesday 03 June 2025 16:04:32 +0000 (0:00:00.245) 0:00:13.384 ********** 2025-06-03 16:04:34.763952 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:04:34.763959 | orchestrator | 2025-06-03 16:04:34.763967 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-03 16:04:34.763974 | orchestrator | Tuesday 03 June 2025 16:04:32 +0000 (0:00:00.255) 0:00:13.640 
********** 2025-06-03 16:04:34.763981 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:04:34.763989 | orchestrator | 2025-06-03 16:04:34.763996 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-03 16:04:34.764003 | orchestrator | Tuesday 03 June 2025 16:04:34 +0000 (0:00:01.621) 0:00:15.262 ********** 2025-06-03 16:04:34.764010 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:04:34.764018 | orchestrator | 2025-06-03 16:04:34.764025 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-03 16:04:34.764036 | orchestrator | Tuesday 03 June 2025 16:04:34 +0000 (0:00:00.244) 0:00:15.506 ********** 2025-06-03 16:04:34.764048 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:04:34.764059 | orchestrator | 2025-06-03 16:04:34.764081 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:04:36.930768 | orchestrator | Tuesday 03 June 2025 16:04:34 +0000 (0:00:00.255) 0:00:15.761 ********** 2025-06-03 16:04:36.930859 | orchestrator | 2025-06-03 16:04:36.930866 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:04:36.930872 | orchestrator | Tuesday 03 June 2025 16:04:34 +0000 (0:00:00.079) 0:00:15.840 ********** 2025-06-03 16:04:36.930877 | orchestrator | 2025-06-03 16:04:36.930882 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:04:36.930887 | orchestrator | Tuesday 03 June 2025 16:04:34 +0000 (0:00:00.068) 0:00:15.908 ********** 2025-06-03 16:04:36.930892 | orchestrator | 2025-06-03 16:04:36.930897 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-03 16:04:36.930902 | orchestrator | Tuesday 03 June 2025 16:04:34 +0000 
(0:00:00.071) 0:00:15.979 ********** 2025-06-03 16:04:36.930907 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-03 16:04:36.930912 | orchestrator | 2025-06-03 16:04:36.930917 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-03 16:04:36.930922 | orchestrator | Tuesday 03 June 2025 16:04:36 +0000 (0:00:01.279) 0:00:17.259 ********** 2025-06-03 16:04:36.930927 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-06-03 16:04:36.931777 | orchestrator |  "msg": [ 2025-06-03 16:04:36.931842 | orchestrator |  "Validator run completed.", 2025-06-03 16:04:36.931858 | orchestrator |  "You can find the report file here:", 2025-06-03 16:04:36.931872 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-06-03T16:04:19+00:00-report.json", 2025-06-03 16:04:36.931892 | orchestrator |  "on the following host:", 2025-06-03 16:04:36.931905 | orchestrator |  "testbed-manager" 2025-06-03 16:04:36.931917 | orchestrator |  ] 2025-06-03 16:04:36.931930 | orchestrator | } 2025-06-03 16:04:36.931986 | orchestrator | 2025-06-03 16:04:36.932000 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 16:04:36.932015 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-06-03 16:04:36.932029 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:04:36.932042 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:04:36.932054 | orchestrator | 2025-06-03 16:04:36.932067 | orchestrator | 2025-06-03 16:04:36.932079 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 16:04:36.932092 | orchestrator | Tuesday 03 June 2025 16:04:36 +0000 (0:00:00.591) 0:00:17.851 ********** 
2025-06-03 16:04:36.932105 | orchestrator | ===============================================================================
2025-06-03 16:04:36.932117 | orchestrator | Aggregate test results step one ----------------------------------------- 1.62s
2025-06-03 16:04:36.932130 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.56s
2025-06-03 16:04:36.932143 | orchestrator | Gather status data ------------------------------------------------------ 1.36s
2025-06-03 16:04:36.932156 | orchestrator | Write report file ------------------------------------------------------- 1.28s
2025-06-03 16:04:36.932169 | orchestrator | Get container info ------------------------------------------------------ 1.03s
2025-06-03 16:04:36.932181 | orchestrator | Create report output directory ------------------------------------------ 0.86s
2025-06-03 16:04:36.932193 | orchestrator | Aggregate test results step one ----------------------------------------- 0.69s
2025-06-03 16:04:36.932206 | orchestrator | Get timestamp for report file ------------------------------------------- 0.67s
2025-06-03 16:04:36.932234 | orchestrator | Print report file information ------------------------------------------- 0.59s
2025-06-03 16:04:36.932246 | orchestrator | Set test result to passed if container is existing ---------------------- 0.54s
2025-06-03 16:04:36.932259 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.36s
2025-06-03 16:04:36.932272 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.33s
2025-06-03 16:04:36.932285 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.33s
2025-06-03 16:04:36.932296 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s
2025-06-03 16:04:36.932308 | orchestrator | Set quorum test data ---------------------------------------------------- 0.33s
2025-06-03 16:04:36.932321 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.32s
2025-06-03 16:04:36.932333 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s
2025-06-03 16:04:36.932345 | orchestrator | Set health test data ---------------------------------------------------- 0.32s
2025-06-03 16:04:36.932357 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s
2025-06-03 16:04:36.932369 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.31s
2025-06-03 16:04:37.261654 | orchestrator | + osism validate ceph-mgrs
2025-06-03 16:04:39.052755 | orchestrator | Registering Redlock._acquired_script
2025-06-03 16:04:39.052838 | orchestrator | Registering Redlock._extend_script
2025-06-03 16:04:39.053076 | orchestrator | Registering Redlock._release_script
2025-06-03 16:04:57.788354 | orchestrator |
2025-06-03 16:04:57.788489 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-06-03 16:04:57.788515 | orchestrator |
2025-06-03 16:04:57.788536 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-03 16:04:57.788554 | orchestrator | Tuesday 03 June 2025 16:04:43 +0000 (0:00:00.427) 0:00:00.427 **********
2025-06-03 16:04:57.788573 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-03 16:04:57.788591 | orchestrator |
2025-06-03 16:04:57.788608 | orchestrator | TASK [Create report output directory] ******************************************
2025-06-03 16:04:57.788625 | orchestrator | Tuesday 03 June 2025 16:04:43 +0000 (0:00:00.639) 0:00:01.067 **********
2025-06-03 16:04:57.788644 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-03 16:04:57.788662 | orchestrator |
2025-06-03 16:04:57.788679 | orchestrator | TASK [Define report vars] ******************************************************
2025-06-03 16:04:57.788695 | orchestrator | Tuesday 03 June 2025 16:04:44 +0000 (0:00:00.835) 0:00:01.903 **********
2025-06-03 16:04:57.788713 | orchestrator | ok: [testbed-node-0]
2025-06-03 16:04:57.788732 | orchestrator |
2025-06-03 16:04:57.788749 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-06-03 16:04:57.788766 | orchestrator | Tuesday 03 June 2025 16:04:45 +0000 (0:00:00.239) 0:00:02.142 **********
2025-06-03 16:04:57.788781 | orchestrator | ok: [testbed-node-0]
2025-06-03 16:04:57.788840 | orchestrator | ok: [testbed-node-1]
2025-06-03 16:04:57.788859 | orchestrator | ok: [testbed-node-2]
2025-06-03 16:04:57.788877 | orchestrator |
2025-06-03 16:04:57.788894 | orchestrator | TASK [Get container info] ******************************************************
2025-06-03 16:04:57.788911 | orchestrator | Tuesday 03 June 2025 16:04:45 +0000 (0:00:00.301) 0:00:02.444 **********
2025-06-03 16:04:57.788928 | orchestrator | ok: [testbed-node-2]
2025-06-03 16:04:57.788945 | orchestrator | ok: [testbed-node-0]
2025-06-03 16:04:57.788962 | orchestrator | ok: [testbed-node-1]
2025-06-03 16:04:57.789017 | orchestrator |
2025-06-03 16:04:57.789035 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-06-03 16:04:57.789052 | orchestrator | Tuesday 03 June 2025 16:04:46 +0000 (0:00:01.035) 0:00:03.479 **********
2025-06-03 16:04:57.789071 | orchestrator | skipping: [testbed-node-0]
2025-06-03 16:04:57.789088 | orchestrator | skipping: [testbed-node-1]
2025-06-03 16:04:57.789106 | orchestrator | skipping: [testbed-node-2]
2025-06-03 16:04:57.789123 | orchestrator |
2025-06-03 16:04:57.789140 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-06-03 16:04:57.789158 | orchestrator | Tuesday 03 June 2025 16:04:46 +0000 (0:00:00.277) 0:00:03.757 **********
2025-06-03 16:04:57.789176 | orchestrator | ok: [testbed-node-0]
2025-06-03 16:04:57.789194 | orchestrator | ok: [testbed-node-1]
2025-06-03 16:04:57.789211 | orchestrator | ok: [testbed-node-2]
2025-06-03 16:04:57.789230 | orchestrator |
2025-06-03 16:04:57.789251 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-03 16:04:57.789269 | orchestrator | Tuesday 03 June 2025 16:04:47 +0000 (0:00:00.532) 0:00:04.289 **********
2025-06-03 16:04:57.789288 | orchestrator | ok: [testbed-node-0]
2025-06-03 16:04:57.789300 | orchestrator | ok: [testbed-node-1]
2025-06-03 16:04:57.789311 | orchestrator | ok: [testbed-node-2]
2025-06-03 16:04:57.789322 | orchestrator |
2025-06-03 16:04:57.789333 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2025-06-03 16:04:57.789344 | orchestrator | Tuesday 03 June 2025 16:04:47 +0000 (0:00:00.316) 0:00:04.606 **********
2025-06-03 16:04:57.789355 | orchestrator | skipping: [testbed-node-0]
2025-06-03 16:04:57.789366 | orchestrator | skipping: [testbed-node-1]
2025-06-03 16:04:57.789377 | orchestrator | skipping: [testbed-node-2]
2025-06-03 16:04:57.789393 | orchestrator |
2025-06-03 16:04:57.789411 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2025-06-03 16:04:57.789430 | orchestrator | Tuesday 03 June 2025 16:04:47 +0000 (0:00:00.310) 0:00:04.916 **********
2025-06-03 16:04:57.789449 | orchestrator | ok: [testbed-node-0]
2025-06-03 16:04:57.789508 | orchestrator | ok: [testbed-node-1]
2025-06-03 16:04:57.789530 | orchestrator | ok: [testbed-node-2]
2025-06-03 16:04:57.789550 | orchestrator |
2025-06-03 16:04:57.789569 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-03 16:04:57.789588 | orchestrator | Tuesday 03 June 2025 16:04:48 +0000 (0:00:00.310) 0:00:05.227 **********
2025-06-03 16:04:57.789605 | orchestrator | skipping: [testbed-node-0]
2025-06-03 16:04:57.789624 | orchestrator |
2025-06-03 16:04:57.789642 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-03 16:04:57.789661 | orchestrator | Tuesday 03 June 2025 16:04:48 +0000 (0:00:00.677) 0:00:05.905 **********
2025-06-03 16:04:57.789679 | orchestrator | skipping: [testbed-node-0]
2025-06-03 16:04:57.789697 | orchestrator |
2025-06-03 16:04:57.789716 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-03 16:04:57.789735 | orchestrator | Tuesday 03 June 2025 16:04:49 +0000 (0:00:00.248) 0:00:06.153 **********
2025-06-03 16:04:57.789754 | orchestrator | skipping: [testbed-node-0]
2025-06-03 16:04:57.789772 | orchestrator |
2025-06-03 16:04:57.789791 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-03 16:04:57.789809 | orchestrator | Tuesday 03 June 2025 16:04:49 +0000 (0:00:00.235) 0:00:06.389 **********
2025-06-03 16:04:57.789826 | orchestrator |
2025-06-03 16:04:57.789838 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-03 16:04:57.789869 | orchestrator | Tuesday 03 June 2025 16:04:49 +0000 (0:00:00.070) 0:00:06.460 **********
2025-06-03 16:04:57.789880 | orchestrator |
2025-06-03 16:04:57.789892 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-03 16:04:57.789903 | orchestrator | Tuesday 03 June 2025 16:04:49 +0000 (0:00:00.072) 0:00:06.533 **********
2025-06-03 16:04:57.789914 | orchestrator |
2025-06-03 16:04:57.789925 | orchestrator | TASK [Print report file information] *******************************************
2025-06-03 16:04:57.789936 | orchestrator | Tuesday 03 June 2025 16:04:49 +0000 (0:00:00.072) 0:00:06.606 **********
2025-06-03 16:04:57.789947 | orchestrator | skipping: [testbed-node-0]
2025-06-03 16:04:57.789958 | orchestrator |
2025-06-03 16:04:57.790007 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-06-03 16:04:57.790110 | orchestrator | Tuesday 03 June 2025 16:04:49 +0000 (0:00:00.246) 0:00:06.852 **********
2025-06-03 16:04:57.790129 | orchestrator | skipping: [testbed-node-0]
2025-06-03 16:04:57.790146 | orchestrator |
2025-06-03 16:04:57.790198 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-06-03 16:04:57.790216 | orchestrator | Tuesday 03 June 2025 16:04:49 +0000 (0:00:00.243) 0:00:07.095 **********
2025-06-03 16:04:57.790233 | orchestrator | ok: [testbed-node-0]
2025-06-03 16:04:57.790250 | orchestrator |
2025-06-03 16:04:57.790270 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-06-03 16:04:57.790288 | orchestrator | Tuesday 03 June 2025 16:04:50 +0000 (0:00:00.107) 0:00:07.203 **********
2025-06-03 16:04:57.790308 | orchestrator | changed: [testbed-node-0]
2025-06-03 16:04:57.790327 | orchestrator |
2025-06-03 16:04:57.790346 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-06-03 16:04:57.790366 | orchestrator | Tuesday 03 June 2025 16:04:52 +0000 (0:00:01.953) 0:00:09.157 **********
2025-06-03 16:04:57.790386 | orchestrator | ok: [testbed-node-0]
2025-06-03 16:04:57.790405 | orchestrator |
2025-06-03 16:04:57.790424 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-06-03 16:04:57.790444 | orchestrator | Tuesday 03 June 2025 16:04:52 +0000 (0:00:00.245) 0:00:09.403 **********
2025-06-03 16:04:57.790464 | orchestrator | ok: [testbed-node-0]
2025-06-03 16:04:57.790483 | orchestrator |
2025-06-03 16:04:57.790502 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-06-03 16:04:57.790521 | orchestrator | Tuesday 03 June 2025 16:04:52 +0000 (0:00:00.148) 0:00:09.888 **********
2025-06-03 16:04:57.790540 | orchestrator | skipping: [testbed-node-0]
2025-06-03 16:04:57.790560 | orchestrator |
2025-06-03 16:04:57.790580 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-06-03 16:04:57.790620 | orchestrator | Tuesday 03 June 2025 16:04:52 +0000 (0:00:00.148) 0:00:10.036 **********
2025-06-03 16:04:57.790639 | orchestrator | ok: [testbed-node-0]
2025-06-03 16:04:57.790664 | orchestrator |
2025-06-03 16:04:57.790684 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-06-03 16:04:57.790703 | orchestrator | Tuesday 03 June 2025 16:04:53 +0000 (0:00:00.175) 0:00:10.211 **********
2025-06-03 16:04:57.790723 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-03 16:04:57.790742 | orchestrator |
2025-06-03 16:04:57.790762 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-06-03 16:04:57.790783 | orchestrator | Tuesday 03 June 2025 16:04:53 +0000 (0:00:00.256) 0:00:10.468 **********
2025-06-03 16:04:57.790801 | orchestrator | skipping: [testbed-node-0]
2025-06-03 16:04:57.790820 | orchestrator |
2025-06-03 16:04:57.790840 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-03 16:04:57.790859 | orchestrator | Tuesday 03 June 2025 16:04:53 +0000 (0:00:00.239) 0:00:10.707 **********
2025-06-03 16:04:57.790878 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-03 16:04:57.790898 | orchestrator |
2025-06-03 16:04:57.790919 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-03 16:04:57.790939 | orchestrator | Tuesday 03 June 2025 16:04:54 +0000 (0:00:01.256) 0:00:11.964 **********
2025-06-03 16:04:57.790959 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-03 16:04:57.791146 | orchestrator |
2025-06-03 16:04:57.791170 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-03 16:04:57.791190 | orchestrator | Tuesday 03 June 2025 16:04:55 +0000 (0:00:00.259) 0:00:12.223 **********
2025-06-03 16:04:57.791207 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-03 16:04:57.791225 | orchestrator |
2025-06-03 16:04:57.791244 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-03 16:04:57.791263 | orchestrator | Tuesday 03 June 2025 16:04:55 +0000 (0:00:00.299) 0:00:12.522 **********
2025-06-03 16:04:57.791281 | orchestrator |
2025-06-03 16:04:57.791298 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-03 16:04:57.791317 | orchestrator | Tuesday 03 June 2025 16:04:55 +0000 (0:00:00.070) 0:00:12.592 **********
2025-06-03 16:04:57.791335 | orchestrator |
2025-06-03 16:04:57.791352 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-03 16:04:57.791369 | orchestrator | Tuesday 03 June 2025 16:04:55 +0000 (0:00:00.071) 0:00:12.664 **********
2025-06-03 16:04:57.791386 | orchestrator |
2025-06-03 16:04:57.791402 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-03 16:04:57.791419 | orchestrator | Tuesday 03 June 2025 16:04:55 +0000 (0:00:00.078) 0:00:12.743 **********
2025-06-03 16:04:57.791436 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-03 16:04:57.791454 | orchestrator |
2025-06-03 16:04:57.791487 | orchestrator | TASK [Print report file information] *******************************************
2025-06-03 16:04:57.791508 | orchestrator | Tuesday 03 June 2025 16:04:57 +0000 (0:00:01.735) 0:00:14.478 **********
2025-06-03 16:04:57.791524 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-06-03 16:04:57.791540 | orchestrator |     "msg": [
2025-06-03 16:04:57.791558 | orchestrator |         "Validator run completed.",
2025-06-03 16:04:57.791575 | orchestrator |         "You can find the report file here:",
2025-06-03 16:04:57.791592 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2025-06-03T16:04:43+00:00-report.json",
2025-06-03 16:04:57.791610 | orchestrator |         "on the following host:",
2025-06-03 16:04:57.791628 | orchestrator |         "testbed-manager"
2025-06-03 16:04:57.791644 | orchestrator |     ]
2025-06-03 16:04:57.791661 | orchestrator | }
2025-06-03 16:04:57.791676 | orchestrator |
2025-06-03 16:04:57.791687 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 16:04:57.791712 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-03 16:04:57.791724 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-03 16:04:57.791754 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-03 16:04:58.136575 | orchestrator |
2025-06-03 16:04:58.136663 | orchestrator |
2025-06-03 16:04:58.136674 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 16:04:58.136681 | orchestrator | Tuesday 03 June 2025 16:04:57 +0000 (0:00:00.389) 0:00:14.868 **********
2025-06-03 16:04:58.136687 | orchestrator | ===============================================================================
2025-06-03 16:04:58.136692 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.95s
2025-06-03 16:04:58.136698 | orchestrator | Write report file ------------------------------------------------------- 1.74s
2025-06-03 16:04:58.136703 | orchestrator | Aggregate test results step one ----------------------------------------- 1.26s
2025-06-03 16:04:58.136711 | orchestrator | Get container info ------------------------------------------------------ 1.04s
2025-06-03 16:04:58.136720 | orchestrator | Create report output directory ------------------------------------------ 0.84s
2025-06-03 16:04:58.136728 | orchestrator | Aggregate test results step one ----------------------------------------- 0.68s
2025-06-03 16:04:58.136736 | orchestrator | Get timestamp for report file ------------------------------------------- 0.64s
2025-06-03 16:04:58.136744 | orchestrator | Set test result to passed if container is existing ---------------------- 0.53s
2025-06-03 16:04:58.136752 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.48s
2025-06-03 16:04:58.136762 | orchestrator | Print report file information ------------------------------------------- 0.39s
2025-06-03 16:04:58.136772 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s
2025-06-03 16:04:58.136781 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.31s
2025-06-03 16:04:58.136792 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.31s
2025-06-03 16:04:58.136799 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s
2025-06-03 16:04:58.136805 | orchestrator | Aggregate test results step three --------------------------------------- 0.30s
2025-06-03 16:04:58.136811 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s
2025-06-03 16:04:58.136817 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s
2025-06-03 16:04:58.136823 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.26s
2025-06-03 16:04:58.136829 | orchestrator | Aggregate test results step two ----------------------------------------- 0.25s
2025-06-03 16:04:58.136835 | orchestrator | Print report file information ------------------------------------------- 0.25s
2025-06-03 16:04:58.395078 | orchestrator | + osism validate ceph-osds
2025-06-03 16:05:00.114793 | orchestrator | Registering Redlock._acquired_script
2025-06-03 16:05:00.114935 | orchestrator | Registering Redlock._extend_script
2025-06-03 16:05:00.114951 | orchestrator | Registering Redlock._release_script
2025-06-03 16:05:09.137738 | orchestrator |
2025-06-03 16:05:09.137827 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-06-03 16:05:09.137839 | orchestrator |
2025-06-03 16:05:09.137851 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-03 16:05:09.137863 | orchestrator | Tuesday 03 June 2025 16:05:04 +0000 (0:00:00.456) 0:00:00.456 **********
2025-06-03 16:05:09.137881 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-03 16:05:09.137894 | orchestrator |
2025-06-03 16:05:09.137904 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-03 16:05:09.137914 | orchestrator | Tuesday 03 June 2025 16:05:05 +0000 (0:00:00.630) 0:00:01.086 **********
2025-06-03 16:05:09.137946 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-03 16:05:09.137957 | orchestrator |
2025-06-03 16:05:09.137966 | orchestrator | TASK [Create report output directory] ******************************************
2025-06-03 16:05:09.137976 | orchestrator | Tuesday 03 June 2025 16:05:05 +0000 (0:00:00.427) 0:00:01.514 **********
2025-06-03 16:05:09.138094 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-03 16:05:09.138107 | orchestrator |
2025-06-03 16:05:09.138114 | orchestrator | TASK [Define report vars] ******************************************************
2025-06-03 16:05:09.138121 | orchestrator | Tuesday 03 June 2025 16:05:06 +0000 (0:00:01.022) 0:00:02.537 **********
2025-06-03 16:05:09.138128 | orchestrator | ok: [testbed-node-3]
2025-06-03 16:05:09.138163 | orchestrator |
2025-06-03 16:05:09.138171 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-06-03 16:05:09.138177 | orchestrator | Tuesday 03 June 2025 16:05:06 +0000 (0:00:00.120) 0:00:02.657 **********
2025-06-03 16:05:09.138185 | orchestrator | skipping: [testbed-node-3]
2025-06-03 16:05:09.138192 | orchestrator |
2025-06-03 16:05:09.138199 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-06-03 16:05:09.138206 | orchestrator | Tuesday 03 June 2025 16:05:06 +0000 (0:00:00.129) 0:00:02.786 **********
2025-06-03 16:05:09.138213 | orchestrator | skipping: [testbed-node-3]
2025-06-03 16:05:09.138219 | orchestrator | skipping: [testbed-node-4]
2025-06-03 16:05:09.138226 | orchestrator | skipping: [testbed-node-5]
2025-06-03 16:05:09.138233 | orchestrator |
2025-06-03 16:05:09.138239 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-06-03 16:05:09.138246 | orchestrator | Tuesday 03 June 2025 16:05:07 +0000 (0:00:00.293) 0:00:03.079 **********
2025-06-03 16:05:09.138253 | orchestrator | ok: [testbed-node-3]
2025-06-03 16:05:09.138259 | orchestrator |
2025-06-03 16:05:09.138266 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-06-03 16:05:09.138272 | orchestrator | Tuesday 03 June 2025 16:05:07 +0000 (0:00:00.153) 0:00:03.232 **********
2025-06-03 16:05:09.138279 | orchestrator | ok: [testbed-node-3]
2025-06-03 16:05:09.138286 | orchestrator | ok: [testbed-node-4]
2025-06-03 16:05:09.138292 | orchestrator | ok: [testbed-node-5]
2025-06-03 16:05:09.138299 | orchestrator |
2025-06-03 16:05:09.138306 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-06-03 16:05:09.138312 | orchestrator | Tuesday 03 June 2025 16:05:07 +0000 (0:00:00.331) 0:00:03.564 **********
2025-06-03 16:05:09.138319 | orchestrator | ok: [testbed-node-3]
2025-06-03 16:05:09.138325 | orchestrator |
2025-06-03 16:05:09.138332 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-03 16:05:09.138339 | orchestrator | Tuesday 03 June 2025 16:05:08 +0000 (0:00:00.660) 0:00:04.225 **********
2025-06-03 16:05:09.138345 | orchestrator | ok: [testbed-node-3]
2025-06-03 16:05:09.138352 | orchestrator | ok: [testbed-node-4]
2025-06-03 16:05:09.138359 | orchestrator | ok: [testbed-node-5]
2025-06-03 16:05:09.138366 | orchestrator |
2025-06-03 16:05:09.138372 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-06-03 16:05:09.138379 | orchestrator | Tuesday 03 June 2025 16:05:08 +0000 (0:00:00.597) 0:00:04.822 **********
2025-06-03 16:05:09.138388 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3fcfe881f3c4851906fd7f3abc92a0818c27aaafceb954420cfd87b288b22ad6', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})
2025-06-03 16:05:09.138397 | orchestrator | skipping: [testbed-node-3] => (item={'id': '34735e837496ff0147954ab6b6b26fb914dbf17674b97f738e6deee339845f4b', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-06-03 16:05:09.138406 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9c3a4fa0a24e69fea5a1373ef60c86b0cc4c0c4f98672aec1d5927b6649cca92', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-06-03 16:05:09.138424 | orchestrator | skipping: [testbed-node-3] => (item={'id': '61dab3dc0da8758f07d568299c9cec09b404234686713d3bedcc4f53792fee9b', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-06-03 16:05:09.138431 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6a55c10cb36eaf31c5e4da1940220c81bbda2369411b7a91d467794fa660578e', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-06-03 16:05:09.138463 | orchestrator | skipping: [testbed-node-3] => (item={'id': '825d569485b0a6ed8ea010e57e57cfd55854c2a6967454bee07276b60830c54e', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-03 16:05:09.138482 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0b70bf6e52fbf00c6b226108c63e80b23fb636c8fe10c8ba88c2e7529968ea36', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-03 16:05:09.138490 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f7b39dc1c3f57d81ed0b3ad31daaf66d9783d22a7073c5766680bc635bd743a6', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-06-03 16:05:09.138497 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bc580554e2c132727af382e7274b15f78821748bac5f59a8fceea3103e8ed107', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2025-06-03 16:05:09.138508 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0ccd074faa85275d7be7a41f24bfbde25e3967e730240f8112497e12e7b79aed', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})
2025-06-03 16:05:09.138515 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b0040cce3c203c352c1e2c1782e2c73f553d4c26c4423b879db3039c84b3d417', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})
2025-06-03 16:05:09.138522 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5d0bdfd2e7a6140fae10323093b02a1c80429dcf8a7f70fb5573d74c6183f08a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 25 minutes'})
2025-06-03 16:05:09.138529 | orchestrator | ok: [testbed-node-3] => (item={'id': 'e01ab257555e3c80899bdb60d7c1712b43af53af2c90ce8eb13d55062ae10c5a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 26 minutes'})
2025-06-03 16:05:09.138537 | orchestrator | ok: [testbed-node-3] => (item={'id': '72ca8c9607e4f7933d8e1dec7e2fccdd09d1f637e33f901138db4fd94e862a09', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 26 minutes'})
2025-06-03 16:05:09.138544 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0ec0d1a55983b5cb9b7ff7ae728f1e015d3dd75d463eb26e3129735e0ec37aab', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})
2025-06-03 16:05:09.138551 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8abcc6475452c0a69130520cbc25b9eb78db6ef62dc7db54616a8a436537da0b', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2025-06-03 16:05:09.138558 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3f030eae7e1a8ee9eb6f4900591fa4c7e61672c704dd28a3d7331aeeb9099c8c', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})
2025-06-03 16:05:09.138572 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1e86b65f605d3ec267a5bb0ccded312daa3d0d6e004b29ef8862206f3e01d0e6', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})
2025-06-03 16:05:09.138578 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1431475bd3d1812d00962a5ec4edfe41a47972bf5fc66f5e80a588e5dc01d810', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})
2025-06-03 16:05:09.138586 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'db7627d5e703037559b246f19979fcb74ab6cf9865f685fae39c3ba3c73fc524', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2025-06-03 16:05:09.138593 | orchestrator | skipping: [testbed-node-4] => (item={'id': '840b0b33a7fa39b47c9ee26d369268a62af9986bbb436b5e978fc460b0794a7d', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})
2025-06-03 16:05:09.138604 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3a511f92b0a73d17b2cabbf63516a17dc4c2b1430f3f69889f62a6b70321509f', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-06-03 16:05:09.464117 | orchestrator | skipping: [testbed-node-4] => (item={'id': '28d48284ea56622f2ed846727a52cd1a83d993be863a127f4c59ebb79e629021', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-06-03 16:05:09.464196 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'aa0c1fda83c22fe757b38419c4a1b54f653f80e6819edef535ce870867c989af', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-06-03 16:05:09.464205 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b7b649a36b5daf0b75bf7baabef5dbcbb30003e132df2387717dfcdf72b0b9b5', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-06-03 16:05:09.464223 | orchestrator | skipping: [testbed-node-4] => (item={'id': '30809730bad632dd48640ecdebff62b910042f424e070ca00d25628481a56ce2', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-03 16:05:09.464230 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8fd4d171d2dfeff259c40e88c23171bcc094ca63a4651890df75144bb15b2be2', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-03 16:05:09.464234 | orchestrator | skipping: [testbed-node-4] => (item={'id': '510981def209a1d15cb168632fa71bca6c0c06927c63faf7759e90d04aadd1ce', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-06-03 16:05:09.464239 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'eb8e4a2a57033e7a713abd7c986c06b53e898f210ba345f2a21edb426eacc3f5', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2025-06-03 16:05:09.464243 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6fed4440ca51e1c73251ae80113226e563242a42e6c78e751ddc7a363d19d5c6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})
2025-06-03 16:05:09.464248 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c8925d81e997f077aa84e19821157a012950e8aa0d263c1bdf138df82896b1c9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})
2025-06-03 16:05:09.464266 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a63c44dfd1b2d799033edae46e91ec5a45cb1603f5ae243add027c2195a7c2a6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 25 minutes'})
2025-06-03 16:05:09.464271 | orchestrator | ok: [testbed-node-4] => (item={'id': '9391123f8d3b17ea3cbda2217cb42dd42fc6cbf4a9b47a549de23dcb452c23b7', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 26 minutes'})
2025-06-03 16:05:09.464276 | orchestrator | ok: [testbed-node-4] => (item={'id': 'c4453730864b9557bc92d7f94f5d3dca9cf0ea2cd642ff4b55c603e97dbc178d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 26 minutes'})
2025-06-03 16:05:09.464280 | orchestrator | skipping: [testbed-node-4] => (item={'id': '78ea66848554fa7b1c4e79f859cde9333697554bac73e79bbc76506dda019e7c', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})
2025-06-03 16:05:09.464285 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9db385918c14289f8053b668e94619833e4245a201cd21365e123b0060f62abe', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2025-06-03 16:05:09.464290 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dc44526973ae8edf37f627a5a63274bf6ea988e105d0bf551a91a20ca91153b9', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})
2025-06-03 16:05:09.464305 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1648c005cc7042bab8f147fe5219d241ec1173e3648183637b2fa31ddc6d0202', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})
2025-06-03 16:05:09.464310 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b6c5d8c6612b9f2799e82f9fcbcdda4e97f83b1ad01d563e45f930f5273d292a', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})
2025-06-03 16:05:09.464314 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a2821913b4537d74c14c0ffd57c50aba575270d3d18316452c3d839c673c7e55', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2025-06-03 16:05:09.464319 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8f9f2f0c63ec458a36471991555909ff991223ac0a722a1a4d753b608967cf4d', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})
2025-06-03 16:05:09.464324 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1059139aea0ca192d58e557ad8cbec2a9f75e608437c294d175a0bc6435991ee', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-06-03 16:05:09.464328 | orchestrator | skipping: [testbed-node-5] => (item={'id': '185e99827726db0b692f514eaa88cf44678bfeaec179cdf9f632e337b5cba3ce', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-06-03 16:05:09.464333 | orchestrator | skipping: [testbed-node-5] => (item={'id': '65e21c0f82bc15abe87b410306042d7a22e5e94884263d4e1c1c7bd44c8e4799', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-06-03 16:05:09.464337 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4354322f3c104378c30ed28c675b5709d965994196086c63287af2db9789e32b', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-06-03 16:05:09.464345 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ac89a6a87bd87a9176f1688448a557ea04b81a8953be229b19f8b9b1768aace1', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-03 16:05:09.464350 | orchestrator | skipping: [testbed-node-5] => (item={'id': '945b4713043c6c37cc0c8590874747241258706dc278c265f9a431e0bc551b7a', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2025-06-03 16:05:09.464354 | orchestrator | skipping: [testbed-node-5] => (item={'id': '12375c7a7ee346a2eda0dab775505e26654079d759c07ea39c14fae93984de3c', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-06-03 16:05:09.464358 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c3babbbc8e9da5a8d3164b901300ded2797179f12addd30547f0ca4199bec472', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2025-06-03 16:05:09.464363 | orchestrator | skipping: [testbed-node-5] => (item={'id': '62ad2b6fa4b585d8fbb1943bc85b95a171e73b35128e5d8fcd0f1bb414571308', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 23 minutes'})
2025-06-03 16:05:09.464367 | orchestrator | skipping: [testbed-node-5] => (item={'id': '901db395f87d5d1da3fc754f1317d0fe02aeaa1b6c7653509edcb49ad24afb48', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})
2025-06-03 16:05:09.464371 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3816cd6bc29ec37063054cef32e35fd087321871412bf935fc715bf20d3df29b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 25 minutes'})
2025-06-03 16:05:09.464379 | orchestrator | ok: [testbed-node-5] => (item={'id': 'b493d40d9804f913c3a3a56cb6c97acf118c8ab426bb396e3597b62cace0ecbc', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 26 minutes'})
2025-06-03 16:05:18.251401 | orchestrator | ok: [testbed-node-5] => (item={'id': '9d2a01d418c56f71062c6580b27d7c3376299886054dded27e65bee83e932e29', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 26 minutes'})
2025-06-03 16:05:18.251525 | orchestrator | skipping: [testbed-node-5] => (item={'id': '654f2a0626ea8eaff7a445396b49b625f023e36c50673c36c41babdf782a38f3', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})
2025-06-03 16:05:18.251537 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'af8a8cd811c32313a60f1395eec65d8d8764e693fcd3f9ba186f73e6fe73713f', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2025-06-03 16:05:18.251552 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4a8c2e683ce4f7818b1ebf5d142b0a246defed384d784f3d84f518d0b0258f1d', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})
2025-06-03 16:05:18.251560 |
orchestrator | skipping: [testbed-node-5] => (item={'id': 'ea2b9d8ee3454847f00c6607ce5e37d9339d755deb68a15a8d76e1da6593f08e', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-03 16:05:18.251566 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0a8fb2d39b3f65607ad416f4d34da54e1b115a2ffe2cff8eede4b1ac5608bc77', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})  2025-06-03 16:05:18.251597 | orchestrator | skipping: [testbed-node-5] => (item={'id': '53472fc8b3e128a73edea4010a6ad55c4d7a3b3c1c598310c6aceaa217f2aa47', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-06-03 16:05:18.251604 | orchestrator | 2025-06-03 16:05:18.251613 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-06-03 16:05:18.251622 | orchestrator | Tuesday 03 June 2025 16:05:09 +0000 (0:00:00.610) 0:00:05.433 ********** 2025-06-03 16:05:18.251627 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:05:18.251635 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:05:18.251641 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:05:18.251647 | orchestrator | 2025-06-03 16:05:18.251653 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-06-03 16:05:18.251659 | orchestrator | Tuesday 03 June 2025 16:05:09 +0000 (0:00:00.340) 0:00:05.774 ********** 2025-06-03 16:05:18.251665 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:05:18.251672 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:05:18.251678 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:05:18.251684 | orchestrator | 2025-06-03 16:05:18.251690 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-06-03 16:05:18.251695 | orchestrator 
| Tuesday 03 June 2025 16:05:10 +0000 (0:00:00.509) 0:00:06.283 ********** 2025-06-03 16:05:18.251701 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:05:18.251707 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:05:18.251713 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:05:18.251720 | orchestrator | 2025-06-03 16:05:18.251726 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-03 16:05:18.251732 | orchestrator | Tuesday 03 June 2025 16:05:10 +0000 (0:00:00.335) 0:00:06.618 ********** 2025-06-03 16:05:18.251738 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:05:18.251744 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:05:18.251750 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:05:18.251756 | orchestrator | 2025-06-03 16:05:18.251763 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-06-03 16:05:18.251769 | orchestrator | Tuesday 03 June 2025 16:05:10 +0000 (0:00:00.322) 0:00:06.941 ********** 2025-06-03 16:05:18.251775 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-06-03 16:05:18.251783 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-06-03 16:05:18.251789 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:05:18.251795 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-06-03 16:05:18.251801 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-06-03 16:05:18.251807 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:05:18.251813 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-06-03 16:05:18.251819 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 
'state': 'running'})  2025-06-03 16:05:18.251825 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:05:18.251832 | orchestrator | 2025-06-03 16:05:18.251838 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-06-03 16:05:18.251844 | orchestrator | Tuesday 03 June 2025 16:05:11 +0000 (0:00:00.300) 0:00:07.241 ********** 2025-06-03 16:05:18.251850 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:05:18.251856 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:05:18.251863 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:05:18.251869 | orchestrator | 2025-06-03 16:05:18.251891 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-03 16:05:18.251896 | orchestrator | Tuesday 03 June 2025 16:05:11 +0000 (0:00:00.506) 0:00:07.747 ********** 2025-06-03 16:05:18.251900 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:05:18.251914 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:05:18.251919 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:05:18.251924 | orchestrator | 2025-06-03 16:05:18.251928 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-03 16:05:18.251933 | orchestrator | Tuesday 03 June 2025 16:05:12 +0000 (0:00:00.299) 0:00:08.047 ********** 2025-06-03 16:05:18.251937 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:05:18.251941 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:05:18.251946 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:05:18.251950 | orchestrator | 2025-06-03 16:05:18.251955 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-06-03 16:05:18.251959 | orchestrator | Tuesday 03 June 2025 16:05:12 +0000 (0:00:00.303) 0:00:08.350 ********** 2025-06-03 16:05:18.251963 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:05:18.251968 | orchestrator | ok: [testbed-node-4] 2025-06-03 
16:05:18.251972 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:05:18.251976 | orchestrator | 2025-06-03 16:05:18.251981 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-03 16:05:18.251986 | orchestrator | Tuesday 03 June 2025 16:05:12 +0000 (0:00:00.317) 0:00:08.667 ********** 2025-06-03 16:05:18.251994 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:05:18.252016 | orchestrator | 2025-06-03 16:05:18.252021 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-03 16:05:18.252026 | orchestrator | Tuesday 03 June 2025 16:05:13 +0000 (0:00:00.727) 0:00:09.395 ********** 2025-06-03 16:05:18.252030 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:05:18.252035 | orchestrator | 2025-06-03 16:05:18.252039 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-03 16:05:18.252043 | orchestrator | Tuesday 03 June 2025 16:05:13 +0000 (0:00:00.259) 0:00:09.655 ********** 2025-06-03 16:05:18.252048 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:05:18.252052 | orchestrator | 2025-06-03 16:05:18.252056 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:05:18.252061 | orchestrator | Tuesday 03 June 2025 16:05:13 +0000 (0:00:00.273) 0:00:09.928 ********** 2025-06-03 16:05:18.252065 | orchestrator | 2025-06-03 16:05:18.252070 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:05:18.252074 | orchestrator | Tuesday 03 June 2025 16:05:14 +0000 (0:00:00.066) 0:00:09.994 ********** 2025-06-03 16:05:18.252078 | orchestrator | 2025-06-03 16:05:18.252083 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:05:18.252087 | orchestrator | Tuesday 03 June 2025 16:05:14 +0000 (0:00:00.084) 0:00:10.079 ********** 2025-06-03 
16:05:18.252091 | orchestrator | 2025-06-03 16:05:18.252096 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-03 16:05:18.252100 | orchestrator | Tuesday 03 June 2025 16:05:14 +0000 (0:00:00.069) 0:00:10.148 ********** 2025-06-03 16:05:18.252104 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:05:18.252109 | orchestrator | 2025-06-03 16:05:18.252113 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-06-03 16:05:18.252117 | orchestrator | Tuesday 03 June 2025 16:05:14 +0000 (0:00:00.246) 0:00:10.395 ********** 2025-06-03 16:05:18.252122 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:05:18.252126 | orchestrator | 2025-06-03 16:05:18.252130 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-03 16:05:18.252135 | orchestrator | Tuesday 03 June 2025 16:05:14 +0000 (0:00:00.251) 0:00:10.647 ********** 2025-06-03 16:05:18.252139 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:05:18.252143 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:05:18.252147 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:05:18.252151 | orchestrator | 2025-06-03 16:05:18.252154 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-06-03 16:05:18.252158 | orchestrator | Tuesday 03 June 2025 16:05:14 +0000 (0:00:00.307) 0:00:10.954 ********** 2025-06-03 16:05:18.252162 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:05:18.252170 | orchestrator | 2025-06-03 16:05:18.252174 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-06-03 16:05:18.252178 | orchestrator | Tuesday 03 June 2025 16:05:15 +0000 (0:00:00.645) 0:00:11.600 ********** 2025-06-03 16:05:18.252182 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-03 16:05:18.252186 | orchestrator | 2025-06-03 16:05:18.252189 | 
orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-06-03 16:05:18.252193 | orchestrator | Tuesday 03 June 2025 16:05:17 +0000 (0:00:01.638) 0:00:13.239 ********** 2025-06-03 16:05:18.252197 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:05:18.252200 | orchestrator | 2025-06-03 16:05:18.252204 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-06-03 16:05:18.252208 | orchestrator | Tuesday 03 June 2025 16:05:17 +0000 (0:00:00.123) 0:00:13.363 ********** 2025-06-03 16:05:18.252211 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:05:18.252215 | orchestrator | 2025-06-03 16:05:18.252219 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-06-03 16:05:18.252223 | orchestrator | Tuesday 03 June 2025 16:05:17 +0000 (0:00:00.319) 0:00:13.682 ********** 2025-06-03 16:05:18.252226 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:05:18.252230 | orchestrator | 2025-06-03 16:05:18.252234 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-06-03 16:05:18.252238 | orchestrator | Tuesday 03 June 2025 16:05:17 +0000 (0:00:00.118) 0:00:13.800 ********** 2025-06-03 16:05:18.252241 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:05:18.252245 | orchestrator | 2025-06-03 16:05:18.252249 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-03 16:05:18.252252 | orchestrator | Tuesday 03 June 2025 16:05:17 +0000 (0:00:00.133) 0:00:13.934 ********** 2025-06-03 16:05:18.252256 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:05:18.252260 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:05:18.252264 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:05:18.252267 | orchestrator | 2025-06-03 16:05:18.252271 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-06-03 
16:05:18.252278 | orchestrator | Tuesday 03 June 2025 16:05:18 +0000 (0:00:00.293) 0:00:14.227 ********** 2025-06-03 16:05:30.755480 | orchestrator | changed: [testbed-node-4] 2025-06-03 16:05:30.755589 | orchestrator | changed: [testbed-node-3] 2025-06-03 16:05:30.755597 | orchestrator | changed: [testbed-node-5] 2025-06-03 16:05:30.755603 | orchestrator | 2025-06-03 16:05:30.755609 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-06-03 16:05:30.755616 | orchestrator | Tuesday 03 June 2025 16:05:20 +0000 (0:00:02.727) 0:00:16.955 ********** 2025-06-03 16:05:30.755622 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:05:30.755628 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:05:30.755634 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:05:30.755639 | orchestrator | 2025-06-03 16:05:30.755644 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-06-03 16:05:30.755649 | orchestrator | Tuesday 03 June 2025 16:05:21 +0000 (0:00:00.312) 0:00:17.267 ********** 2025-06-03 16:05:30.755654 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:05:30.755659 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:05:30.755664 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:05:30.755669 | orchestrator | 2025-06-03 16:05:30.755674 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-06-03 16:05:30.755679 | orchestrator | Tuesday 03 June 2025 16:05:21 +0000 (0:00:00.478) 0:00:17.746 ********** 2025-06-03 16:05:30.755685 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:05:30.755690 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:05:30.755711 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:05:30.755717 | orchestrator | 2025-06-03 16:05:30.755722 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-06-03 16:05:30.755727 | orchestrator | Tuesday 
03 June 2025 16:05:22 +0000 (0:00:00.293) 0:00:18.039 ********** 2025-06-03 16:05:30.755732 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:05:30.755756 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:05:30.755762 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:05:30.755767 | orchestrator | 2025-06-03 16:05:30.755772 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-06-03 16:05:30.755777 | orchestrator | Tuesday 03 June 2025 16:05:22 +0000 (0:00:00.497) 0:00:18.537 ********** 2025-06-03 16:05:30.755782 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:05:30.755787 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:05:30.755792 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:05:30.755797 | orchestrator | 2025-06-03 16:05:30.755803 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-06-03 16:05:30.755808 | orchestrator | Tuesday 03 June 2025 16:05:22 +0000 (0:00:00.292) 0:00:18.829 ********** 2025-06-03 16:05:30.755813 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:05:30.755818 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:05:30.755823 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:05:30.755828 | orchestrator | 2025-06-03 16:05:30.755833 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-03 16:05:30.755838 | orchestrator | Tuesday 03 June 2025 16:05:23 +0000 (0:00:00.276) 0:00:19.106 ********** 2025-06-03 16:05:30.755843 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:05:30.755848 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:05:30.755853 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:05:30.755858 | orchestrator | 2025-06-03 16:05:30.755863 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-06-03 16:05:30.755868 | orchestrator | Tuesday 03 June 2025 16:05:23 +0000 
(0:00:00.482) 0:00:19.588 ********** 2025-06-03 16:05:30.755873 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:05:30.755878 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:05:30.755883 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:05:30.755888 | orchestrator | 2025-06-03 16:05:30.755893 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-06-03 16:05:30.755899 | orchestrator | Tuesday 03 June 2025 16:05:24 +0000 (0:00:00.738) 0:00:20.327 ********** 2025-06-03 16:05:30.755904 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:05:30.755908 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:05:30.755913 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:05:30.755918 | orchestrator | 2025-06-03 16:05:30.755923 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-06-03 16:05:30.755928 | orchestrator | Tuesday 03 June 2025 16:05:24 +0000 (0:00:00.284) 0:00:20.611 ********** 2025-06-03 16:05:30.755933 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:05:30.755939 | orchestrator | skipping: [testbed-node-4] 2025-06-03 16:05:30.755944 | orchestrator | skipping: [testbed-node-5] 2025-06-03 16:05:30.755949 | orchestrator | 2025-06-03 16:05:30.755953 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-06-03 16:05:30.755959 | orchestrator | Tuesday 03 June 2025 16:05:24 +0000 (0:00:00.294) 0:00:20.906 ********** 2025-06-03 16:05:30.755963 | orchestrator | ok: [testbed-node-3] 2025-06-03 16:05:30.755969 | orchestrator | ok: [testbed-node-4] 2025-06-03 16:05:30.755974 | orchestrator | ok: [testbed-node-5] 2025-06-03 16:05:30.755979 | orchestrator | 2025-06-03 16:05:30.755984 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-03 16:05:30.755989 | orchestrator | Tuesday 03 June 2025 16:05:25 +0000 (0:00:00.300) 0:00:21.207 ********** 2025-06-03 
16:05:30.755994 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-03 16:05:30.755999 | orchestrator | 2025-06-03 16:05:30.756004 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-03 16:05:30.756010 | orchestrator | Tuesday 03 June 2025 16:05:25 +0000 (0:00:00.691) 0:00:21.898 ********** 2025-06-03 16:05:30.756039 | orchestrator | skipping: [testbed-node-3] 2025-06-03 16:05:30.756046 | orchestrator | 2025-06-03 16:05:30.756052 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-03 16:05:30.756058 | orchestrator | Tuesday 03 June 2025 16:05:26 +0000 (0:00:00.250) 0:00:22.149 ********** 2025-06-03 16:05:30.756069 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-03 16:05:30.756075 | orchestrator | 2025-06-03 16:05:30.756081 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-03 16:05:30.756087 | orchestrator | Tuesday 03 June 2025 16:05:27 +0000 (0:00:01.553) 0:00:23.702 ********** 2025-06-03 16:05:30.756093 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-03 16:05:30.756099 | orchestrator | 2025-06-03 16:05:30.756105 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-03 16:05:30.756111 | orchestrator | Tuesday 03 June 2025 16:05:27 +0000 (0:00:00.260) 0:00:23.962 ********** 2025-06-03 16:05:30.756132 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-03 16:05:30.756139 | orchestrator | 2025-06-03 16:05:30.756145 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-03 16:05:30.756151 | orchestrator | Tuesday 03 June 2025 16:05:28 +0000 (0:00:00.259) 0:00:24.222 ********** 2025-06-03 16:05:30.756156 | orchestrator | 2025-06-03 16:05:30.756163 | orchestrator | TASK [Flush handlers] 
********************************************************** 2025-06-03 16:05:30.756169 | orchestrator | Tuesday 03 June 2025 16:05:28 +0000 (0:00:00.067) 0:00:24.289 **********
2025-06-03 16:05:30.756175 | orchestrator |
2025-06-03 16:05:30.756181 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-03 16:05:30.756186 | orchestrator | Tuesday 03 June 2025 16:05:28 +0000 (0:00:00.067) 0:00:24.357 **********
2025-06-03 16:05:30.756192 | orchestrator |
2025-06-03 16:05:30.756198 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-03 16:05:30.756204 | orchestrator | Tuesday 03 June 2025 16:05:28 +0000 (0:00:00.074) 0:00:24.431 **********
2025-06-03 16:05:30.756210 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-03 16:05:30.756216 | orchestrator |
2025-06-03 16:05:30.756222 | orchestrator | TASK [Print report file information] *******************************************
2025-06-03 16:05:30.756228 | orchestrator | Tuesday 03 June 2025 16:05:29 +0000 (0:00:01.365) 0:00:25.797 **********
2025-06-03 16:05:30.756234 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-06-03 16:05:30.756240 | orchestrator |  "msg": [
2025-06-03 16:05:30.756246 | orchestrator |  "Validator run completed.",
2025-06-03 16:05:30.756253 | orchestrator |  "You can find the report file here:",
2025-06-03 16:05:30.756260 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-06-03T16:05:04+00:00-report.json",
2025-06-03 16:05:30.756267 | orchestrator |  "on the following host:",
2025-06-03 16:05:30.756273 | orchestrator |  "testbed-manager"
2025-06-03 16:05:30.756279 | orchestrator |  ]
2025-06-03 16:05:30.756285 | orchestrator | }
2025-06-03 16:05:30.756292 | orchestrator |
2025-06-03 16:05:30.756298 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03
16:05:30.756305 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-06-03 16:05:30.756314 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-03 16:05:30.756356 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-03 16:05:30.756365 | orchestrator |
2025-06-03 16:05:30.756374 | orchestrator |
2025-06-03 16:05:30.756383 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 16:05:30.756392 | orchestrator | Tuesday 03 June 2025 16:05:30 +0000 (0:00:00.600) 0:00:26.398 **********
2025-06-03 16:05:30.756401 | orchestrator | ===============================================================================
2025-06-03 16:05:30.756410 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.73s
2025-06-03 16:05:30.756418 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.64s
2025-06-03 16:05:30.756433 | orchestrator | Aggregate test results step one ----------------------------------------- 1.55s
2025-06-03 16:05:30.756442 | orchestrator | Write report file ------------------------------------------------------- 1.37s
2025-06-03 16:05:30.756450 | orchestrator | Create report output directory ------------------------------------------ 1.02s
2025-06-03 16:05:30.756458 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.74s
2025-06-03 16:05:30.756466 | orchestrator | Aggregate test results step one ----------------------------------------- 0.73s
2025-06-03 16:05:30.756474 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.69s
2025-06-03 16:05:30.756483 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.66s
2025-06-03 16:05:30.756491 |
orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.65s
2025-06-03 16:05:30.756500 | orchestrator | Get timestamp for report file ------------------------------------------- 0.63s
2025-06-03 16:05:30.756508 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.61s
2025-06-03 16:05:30.756517 | orchestrator | Print report file information ------------------------------------------- 0.60s
2025-06-03 16:05:30.756525 | orchestrator | Prepare test data ------------------------------------------------------- 0.60s
2025-06-03 16:05:30.756533 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.51s
2025-06-03 16:05:30.756542 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.51s
2025-06-03 16:05:30.756550 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.50s
2025-06-03 16:05:30.756558 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s
2025-06-03 16:05:30.756567 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.48s
2025-06-03 16:05:30.756575 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.43s
2025-06-03 16:05:31.027304 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-06-03 16:05:31.034706 | orchestrator | + set -e
2025-06-03 16:05:31.034816 | orchestrator | + source /opt/manager-vars.sh
2025-06-03 16:05:31.034827 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-03 16:05:31.034835 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-03 16:05:31.034842 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-03 16:05:31.034849 | orchestrator | ++ CEPH_VERSION=reef
2025-06-03 16:05:31.034857 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-03 16:05:31.034866 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-03 16:05:31.034873 | orchestrator | ++ export MANAGER_VERSION=latest
2025-06-03 16:05:31.034880 | orchestrator | ++ MANAGER_VERSION=latest
2025-06-03 16:05:31.034887 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-03 16:05:31.034894 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-03 16:05:31.034901 | orchestrator | ++ export ARA=false
2025-06-03 16:05:31.034908 | orchestrator | ++ ARA=false
2025-06-03 16:05:31.034915 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-03 16:05:31.034922 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-03 16:05:31.034929 | orchestrator | ++ export TEMPEST=false
2025-06-03 16:05:31.034936 | orchestrator | ++ TEMPEST=false
2025-06-03 16:05:31.034943 | orchestrator | ++ export IS_ZUUL=true
2025-06-03 16:05:31.034949 | orchestrator | ++ IS_ZUUL=true
2025-06-03 16:05:31.034956 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.16
2025-06-03 16:05:31.034963 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.16
2025-06-03 16:05:31.034970 | orchestrator | ++ export EXTERNAL_API=false
2025-06-03 16:05:31.034977 | orchestrator | ++ EXTERNAL_API=false
2025-06-03 16:05:31.034984 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-03 16:05:31.034991 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-03 16:05:31.034998 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-03 16:05:31.035004 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-03 16:05:31.035011 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-03 16:05:31.035056 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-03 16:05:31.035064 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-06-03 16:05:31.035071 | orchestrator | + source /etc/os-release
2025-06-03 16:05:31.035078 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS'
2025-06-03 16:05:31.035084 | orchestrator | ++ NAME=Ubuntu
2025-06-03 16:05:31.035091 | orchestrator | ++ VERSION_ID=24.04
2025-06-03 16:05:31.035125 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)'
2025-06-03 16:05:31.035132 | orchestrator | ++ VERSION_CODENAME=noble
2025-06-03 16:05:31.035139 | orchestrator | ++ ID=ubuntu
2025-06-03 16:05:31.035145 | orchestrator | ++ ID_LIKE=debian
2025-06-03 16:05:31.035152 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2025-06-03 16:05:31.035159 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2025-06-03 16:05:31.035181 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2025-06-03 16:05:31.035189 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2025-06-03 16:05:31.035197 | orchestrator | ++ UBUNTU_CODENAME=noble
2025-06-03 16:05:31.035204 | orchestrator | ++ LOGO=ubuntu-logo
2025-06-03 16:05:31.035210 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2025-06-03 16:05:31.035218 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2025-06-03 16:05:31.035227 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-06-03 16:05:31.059628 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-06-03 16:05:55.860244 | orchestrator |
2025-06-03 16:05:55.860347 | orchestrator | # Status of Elasticsearch
2025-06-03 16:05:55.860360 | orchestrator |
2025-06-03 16:05:55.860370 | orchestrator | + pushd /opt/configuration/contrib
2025-06-03 16:05:55.860380 | orchestrator | + echo
2025-06-03 16:05:55.860389 | orchestrator | + echo '# Status of Elasticsearch'
2025-06-03 16:05:55.860397 | orchestrator | + echo
2025-06-03 16:05:55.860407 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2025-06-03 16:05:56.070486 | orchestrator | OK - elasticsearch (kolla_logging) is running.
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-06-03 16:05:56.070689 | orchestrator | 2025-06-03 16:05:56.070702 | orchestrator | # Status of MariaDB 2025-06-03 16:05:56.070711 | orchestrator | 2025-06-03 16:05:56.070718 | orchestrator | + echo 2025-06-03 16:05:56.070724 | orchestrator | + echo '# Status of MariaDB' 2025-06-03 16:05:56.070728 | orchestrator | + echo 2025-06-03 16:05:56.070732 | orchestrator | + MARIADB_USER=root_shard_0 2025-06-03 16:05:56.070738 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-06-03 16:05:56.149992 | orchestrator | Reading package lists... 2025-06-03 16:05:56.576772 | orchestrator | Building dependency tree... 2025-06-03 16:05:56.577755 | orchestrator | Reading state information... 2025-06-03 16:05:57.069219 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-06-03 16:05:57.069311 | orchestrator | bc set to manually installed. 2025-06-03 16:05:57.069323 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2025-06-03 16:05:57.774777 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-06-03 16:05:57.774855 | orchestrator | 2025-06-03 16:05:57.774864 | orchestrator | + echo 2025-06-03 16:05:57.774871 | orchestrator | + echo '# Status of Prometheus' 2025-06-03 16:05:57.774884 | orchestrator | # Status of Prometheus 2025-06-03 16:05:57.774890 | orchestrator | 2025-06-03 16:05:57.774896 | orchestrator | + echo 2025-06-03 16:05:57.774902 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-06-03 16:05:57.837591 | orchestrator | Unauthorized 2025-06-03 16:05:57.840550 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-06-03 16:05:57.903320 | orchestrator | Unauthorized 2025-06-03 16:05:57.907439 | orchestrator | 2025-06-03 16:05:57.907464 | orchestrator | # Status of RabbitMQ 2025-06-03 16:05:57.907470 | orchestrator | 2025-06-03 16:05:57.907475 | orchestrator | + echo 2025-06-03 16:05:57.907480 | orchestrator | + echo '# Status of RabbitMQ' 2025-06-03 16:05:57.907484 | orchestrator | + echo 2025-06-03 16:05:57.907490 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-06-03 16:05:58.369801 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-06-03 16:05:58.378220 | orchestrator | 2025-06-03 16:05:58.378308 | orchestrator | # Status of Redis 2025-06-03 16:05:58.378322 | orchestrator | 2025-06-03 16:05:58.378335 | orchestrator | + echo 2025-06-03 16:05:58.378347 | orchestrator | + echo '# Status of Redis' 2025-06-03 16:05:58.378359 | orchestrator | + echo 2025-06-03 16:05:58.378372 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-06-03 16:05:58.382870 | orchestrator | 
TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001860s;;;0.000000;10.000000 2025-06-03 16:05:58.383460 | orchestrator | + popd 2025-06-03 16:05:58.383505 | orchestrator | 2025-06-03 16:05:58.383527 | orchestrator | # Create backup of MariaDB database 2025-06-03 16:05:58.383546 | orchestrator | 2025-06-03 16:05:58.383562 | orchestrator | + echo 2025-06-03 16:05:58.383573 | orchestrator | + echo '# Create backup of MariaDB database' 2025-06-03 16:05:58.383584 | orchestrator | + echo 2025-06-03 16:05:58.383596 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-06-03 16:06:00.481103 | orchestrator | 2025-06-03 16:06:00 | INFO  | Task aa608e63-c129-404b-a902-479c45a0c470 (mariadb_backup) was prepared for execution. 2025-06-03 16:06:00.481184 | orchestrator | 2025-06-03 16:06:00 | INFO  | It takes a moment until task aa608e63-c129-404b-a902-479c45a0c470 (mariadb_backup) has been started and output is visible here. 2025-06-03 16:06:04.282254 | orchestrator | 2025-06-03 16:06:04.284408 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-03 16:06:04.285325 | orchestrator | 2025-06-03 16:06:04.285911 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-03 16:06:04.286648 | orchestrator | Tuesday 03 June 2025 16:06:04 +0000 (0:00:00.176) 0:00:00.176 ********** 2025-06-03 16:06:04.488920 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:06:04.607628 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:06:04.608813 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:06:04.610143 | orchestrator | 2025-06-03 16:06:04.611397 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-03 16:06:04.612014 | orchestrator | Tuesday 03 June 2025 16:06:04 +0000 (0:00:00.327) 0:00:00.503 ********** 2025-06-03 16:06:05.168802 | orchestrator | ok: [testbed-node-0] => 
(item=enable_mariadb_True) 2025-06-03 16:06:05.169230 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-03 16:06:05.170525 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-03 16:06:05.170545 | orchestrator | 2025-06-03 16:06:05.171209 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-03 16:06:05.171744 | orchestrator | 2025-06-03 16:06:05.172752 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-03 16:06:05.173025 | orchestrator | Tuesday 03 June 2025 16:06:05 +0000 (0:00:00.559) 0:00:01.063 ********** 2025-06-03 16:06:05.595658 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-03 16:06:05.595774 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-03 16:06:05.596057 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-03 16:06:05.597185 | orchestrator | 2025-06-03 16:06:05.597676 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-03 16:06:05.601707 | orchestrator | Tuesday 03 June 2025 16:06:05 +0000 (0:00:00.427) 0:00:01.490 ********** 2025-06-03 16:06:06.145011 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-03 16:06:06.147940 | orchestrator | 2025-06-03 16:06:06.148005 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-06-03 16:06:06.148018 | orchestrator | Tuesday 03 June 2025 16:06:06 +0000 (0:00:00.548) 0:00:02.038 ********** 2025-06-03 16:06:09.469869 | orchestrator | ok: [testbed-node-2] 2025-06-03 16:06:09.471521 | orchestrator | ok: [testbed-node-0] 2025-06-03 16:06:09.471893 | orchestrator | ok: [testbed-node-1] 2025-06-03 16:06:09.474688 | orchestrator | 2025-06-03 16:06:09.477723 | orchestrator | TASK [mariadb : Taking full database backup via 
Mariabackup] ******************* 2025-06-03 16:06:09.477780 | orchestrator | Tuesday 03 June 2025 16:06:09 +0000 (0:00:03.321) 0:00:05.359 ********** 2025-06-03 16:06:27.393228 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-03 16:06:27.393332 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-06-03 16:06:27.393370 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-03 16:06:27.393951 | orchestrator | mariadb_bootstrap_restart 2025-06-03 16:06:27.459626 | orchestrator | skipping: [testbed-node-1] 2025-06-03 16:06:27.460350 | orchestrator | skipping: [testbed-node-2] 2025-06-03 16:06:27.461482 | orchestrator | changed: [testbed-node-0] 2025-06-03 16:06:27.464447 | orchestrator | 2025-06-03 16:06:27.464764 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-03 16:06:27.465701 | orchestrator | skipping: no hosts matched 2025-06-03 16:06:27.466328 | orchestrator | 2025-06-03 16:06:27.466873 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-03 16:06:27.471374 | orchestrator | skipping: no hosts matched 2025-06-03 16:06:27.471418 | orchestrator | 2025-06-03 16:06:27.471426 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-03 16:06:27.471498 | orchestrator | skipping: no hosts matched 2025-06-03 16:06:27.471812 | orchestrator | 2025-06-03 16:06:27.472827 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-03 16:06:27.472875 | orchestrator | 2025-06-03 16:06:27.473322 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-03 16:06:27.473636 | orchestrator | Tuesday 03 June 2025 16:06:27 +0000 (0:00:17.995) 0:00:23.355 ********** 2025-06-03 16:06:27.632757 | orchestrator | 
skipping: [testbed-node-0] 2025-06-03 16:06:27.739503 | orchestrator | skipping: [testbed-node-1] 2025-06-03 16:06:27.742548 | orchestrator | skipping: [testbed-node-2] 2025-06-03 16:06:27.742606 | orchestrator | 2025-06-03 16:06:27.742615 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-03 16:06:27.742625 | orchestrator | Tuesday 03 June 2025 16:06:27 +0000 (0:00:00.279) 0:00:23.634 ********** 2025-06-03 16:06:28.006745 | orchestrator | skipping: [testbed-node-0] 2025-06-03 16:06:28.049579 | orchestrator | skipping: [testbed-node-1] 2025-06-03 16:06:28.050718 | orchestrator | skipping: [testbed-node-2] 2025-06-03 16:06:28.052686 | orchestrator | 2025-06-03 16:06:28.054410 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-03 16:06:28.054580 | orchestrator | 2025-06-03 16:06:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-03 16:06:28.055010 | orchestrator | 2025-06-03 16:06:28 | INFO  | Please wait and do not abort execution. 
2025-06-03 16:06:28.055930 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-03 16:06:28.056576 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-03 16:06:28.057303 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-03 16:06:28.057802 | orchestrator | 2025-06-03 16:06:28.058607 | orchestrator | 2025-06-03 16:06:28.058986 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-03 16:06:28.059498 | orchestrator | Tuesday 03 June 2025 16:06:28 +0000 (0:00:00.310) 0:00:23.945 ********** 2025-06-03 16:06:28.060253 | orchestrator | =============================================================================== 2025-06-03 16:06:28.060638 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 18.00s 2025-06-03 16:06:28.061140 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.32s 2025-06-03 16:06:28.061829 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2025-06-03 16:06:28.062311 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.55s 2025-06-03 16:06:28.062878 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.43s 2025-06-03 16:06:28.063129 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-06-03 16:06:28.063604 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.31s 2025-06-03 16:06:28.064033 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.28s 2025-06-03 16:06:28.422734 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-06-03 16:06:28.431142 | orchestrator | + set -e 
2025-06-03 16:06:28.432331 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-03 16:06:28.432378 | orchestrator | ++ export INTERACTIVE=false 2025-06-03 16:06:28.432387 | orchestrator | ++ INTERACTIVE=false 2025-06-03 16:06:28.432394 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-03 16:06:28.432400 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-03 16:06:28.432406 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-03 16:06:28.432844 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-03 16:06:28.439574 | orchestrator | 2025-06-03 16:06:28.439621 | orchestrator | # OpenStack endpoints 2025-06-03 16:06:28.439629 | orchestrator | 2025-06-03 16:06:28.439636 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-03 16:06:28.439642 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-03 16:06:28.439668 | orchestrator | + export OS_CLOUD=admin 2025-06-03 16:06:28.439674 | orchestrator | + OS_CLOUD=admin 2025-06-03 16:06:28.439680 | orchestrator | + echo 2025-06-03 16:06:28.439686 | orchestrator | + echo '# OpenStack endpoints' 2025-06-03 16:06:28.439693 | orchestrator | + echo 2025-06-03 16:06:28.439699 | orchestrator | + openstack endpoint list 2025-06-03 16:06:32.104593 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-03 16:06:32.104696 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-06-03 16:06:32.104710 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-03 16:06:32.104721 | orchestrator | | 096e1433406840d198d8142e50e1d407 | RegionOne | keystone | identity | True | internal | 
https://api-int.testbed.osism.xyz:5000 | 2025-06-03 16:06:32.104731 | orchestrator | | 0d86748d46454c438516a6e92e80a31c | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-06-03 16:06:32.104741 | orchestrator | | 1a4df9c0873e465687ecd072cb798d53 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-03 16:06:32.104751 | orchestrator | | 246f6f740c95436c85b7f7dafe8fa302 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-06-03 16:06:32.104761 | orchestrator | | 2f60cf85233146738aaaf71eb44428c4 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-06-03 16:06:32.104770 | orchestrator | | 4c779cbd4ece462bb13beacde83c944c | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-06-03 16:06:32.104783 | orchestrator | | 4ec939b92dfc40b4b1369164e150349d | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-06-03 16:06:32.104800 | orchestrator | | 511258d3b0ab4a528ff8fe68ee4453ca | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-06-03 16:06:32.104817 | orchestrator | | 67c338bc5a6e49dabcc4dc1884612627 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-06-03 16:06:32.104832 | orchestrator | | 681e6ac080e14381a4229e8470de9370 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-06-03 16:06:32.104848 | orchestrator | | 6a7e4fbd5e41468f9d87d35dcf3f4504 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-06-03 16:06:32.104897 | orchestrator | | 90872f601b8b4842a0b4d418cc4e073a | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 
2025-06-03 16:06:32.104915 | orchestrator | | a4099323349b4931aa0ace8c7332ffcb | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-06-03 16:06:32.104929 | orchestrator | | aafff9600e814c668e751e58f0f13528 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-03 16:06:32.104944 | orchestrator | | b56f09975f6040438a9f0d0c24925dc2 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-06-03 16:06:32.104958 | orchestrator | | c9829871af8b4f428ca5226d47cb5f9f | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-06-03 16:06:32.104975 | orchestrator | | d1620b72ae324b65832315fdbedaac07 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-03 16:06:32.104992 | orchestrator | | dd93c04ee29c4f6a892b4cfacfcd2271 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-06-03 16:06:32.105008 | orchestrator | | f3be5b60c9e145ada274c18c8f601f00 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-06-03 16:06:32.105025 | orchestrator | | fb0b06ff26a84bc69e6bdaf257cf0d30 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-06-03 16:06:32.105060 | orchestrator | | fbec41635da04b32b46e7dba0cce56fc | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-06-03 16:06:32.105127 | orchestrator | | fc9b38f45b2f432e83ffd6fdb1b9b00e | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-06-03 16:06:32.105148 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-03 16:06:32.365602 | orchestrator | 
2025-06-03 16:06:32.365692 | orchestrator | # Cinder 2025-06-03 16:06:32.365704 | orchestrator | 2025-06-03 16:06:32.365716 | orchestrator | + echo 2025-06-03 16:06:32.365726 | orchestrator | + echo '# Cinder' 2025-06-03 16:06:32.365736 | orchestrator | + echo 2025-06-03 16:06:32.365747 | orchestrator | + openstack volume service list 2025-06-03 16:06:35.055170 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-03 16:06:35.055243 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-06-03 16:06:35.055249 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-03 16:06:35.055254 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-03T16:06:28.000000 | 2025-06-03 16:06:35.055258 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-03T16:06:29.000000 | 2025-06-03 16:06:35.055262 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-03T16:06:29.000000 | 2025-06-03 16:06:35.055266 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-06-03T16:06:28.000000 | 2025-06-03 16:06:35.055270 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-06-03T16:06:28.000000 | 2025-06-03 16:06:35.055274 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-06-03T16:06:29.000000 | 2025-06-03 16:06:35.055296 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-06-03T16:06:28.000000 | 2025-06-03 16:06:35.055300 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-06-03T16:06:29.000000 | 2025-06-03 16:06:35.055303 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-06-03T16:06:28.000000 | 2025-06-03 16:06:35.055307 | 
orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-03 16:06:35.312469 | orchestrator | 2025-06-03 16:06:35.312562 | orchestrator | # Neutron 2025-06-03 16:06:35.312573 | orchestrator | 2025-06-03 16:06:35.312580 | orchestrator | + echo 2025-06-03 16:06:35.312587 | orchestrator | + echo '# Neutron' 2025-06-03 16:06:35.312607 | orchestrator | + echo 2025-06-03 16:06:35.312620 | orchestrator | + openstack network agent list 2025-06-03 16:06:38.466482 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-03 16:06:38.466586 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-06-03 16:06:38.466601 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-03 16:06:38.466613 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-06-03 16:06:38.466624 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-06-03 16:06:38.466635 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-06-03 16:06:38.466646 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-06-03 16:06:38.466657 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-06-03 16:06:38.466668 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-06-03 16:06:38.466679 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | 
neutron-ovn-metadata-agent | 2025-06-03 16:06:38.466691 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-03 16:06:38.466702 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-03 16:06:38.466713 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-03 16:06:38.734335 | orchestrator | + openstack network service provider list 2025-06-03 16:06:41.370933 | orchestrator | +---------------+------+---------+ 2025-06-03 16:06:41.371026 | orchestrator | | Service Type | Name | Default | 2025-06-03 16:06:41.371039 | orchestrator | +---------------+------+---------+ 2025-06-03 16:06:41.371048 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-06-03 16:06:41.371057 | orchestrator | +---------------+------+---------+ 2025-06-03 16:06:41.664551 | orchestrator | 2025-06-03 16:06:41.664640 | orchestrator | # Nova 2025-06-03 16:06:41.664652 | orchestrator | 2025-06-03 16:06:41.664661 | orchestrator | + echo 2025-06-03 16:06:41.664670 | orchestrator | + echo '# Nova' 2025-06-03 16:06:41.664679 | orchestrator | + echo 2025-06-03 16:06:41.664688 | orchestrator | + openstack compute service list 2025-06-03 16:06:44.747482 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-03 16:06:44.747568 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-06-03 16:06:44.747595 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-03 16:06:44.747599 | orchestrator | | 2330ec73-cba8-43e0-a4ab-8ecb4f0998c4 | nova-scheduler | testbed-node-0 | internal | 
enabled | up | 2025-06-03T16:06:42.000000 | 2025-06-03 16:06:44.747603 | orchestrator | | 8765e3f6-4b6d-4a42-8646-d001d46fce4a | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-03T16:06:40.000000 | 2025-06-03 16:06:44.747607 | orchestrator | | 752733d6-c5c9-4091-8c73-3a457a5d761c | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-03T16:06:42.000000 | 2025-06-03 16:06:44.747611 | orchestrator | | b3ae7baf-74f4-408f-bf28-84faa9eaafb8 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-06-03T16:06:43.000000 | 2025-06-03 16:06:44.747615 | orchestrator | | e60f3575-6a67-4ed0-b022-8a3b93d4c429 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-06-03T16:06:36.000000 | 2025-06-03 16:06:44.747619 | orchestrator | | e912076d-3780-454f-928b-d6c563e2a28a | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-06-03T16:06:37.000000 | 2025-06-03 16:06:44.747622 | orchestrator | | 8e9cf986-cefd-43f9-8657-a281d8f358ee | nova-compute | testbed-node-5 | nova | enabled | up | 2025-06-03T16:06:37.000000 | 2025-06-03 16:06:44.747626 | orchestrator | | 146061a1-eab8-4421-9b21-02541cbb197f | nova-compute | testbed-node-4 | nova | enabled | up | 2025-06-03T16:06:38.000000 | 2025-06-03 16:06:44.747630 | orchestrator | | c59db517-73ec-46f4-9346-05c6008f51a0 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-06-03T16:06:38.000000 | 2025-06-03 16:06:44.747634 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-03 16:06:45.047721 | orchestrator | + openstack hypervisor list 2025-06-03 16:06:49.374728 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-03 16:06:49.374839 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-06-03 16:06:49.374855 | orchestrator | 
+--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-03 16:06:49.374867 | orchestrator | | e941a93d-8fec-4f19-b712-a652ef4fe68a | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-06-03 16:06:49.374878 | orchestrator | | 6d295dcb-867d-4ad9-82eb-4f48629cd239 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-06-03 16:06:49.374889 | orchestrator | | e9bbbb24-fccb-4bbe-973e-bc24615f1ed9 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-06-03 16:06:49.374900 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-03 16:06:49.661321 | orchestrator | 2025-06-03 16:06:49.661407 | orchestrator | # Run OpenStack test play 2025-06-03 16:06:49.661418 | orchestrator | 2025-06-03 16:06:49.661425 | orchestrator | + echo 2025-06-03 16:06:49.661433 | orchestrator | + echo '# Run OpenStack test play' 2025-06-03 16:06:49.661441 | orchestrator | + echo 2025-06-03 16:06:49.661449 | orchestrator | + osism apply --environment openstack test 2025-06-03 16:06:51.331373 | orchestrator | 2025-06-03 16:06:51 | INFO  | Trying to run play test in environment openstack 2025-06-03 16:06:51.336100 | orchestrator | Registering Redlock._acquired_script 2025-06-03 16:06:51.336229 | orchestrator | Registering Redlock._extend_script 2025-06-03 16:06:51.336245 | orchestrator | Registering Redlock._release_script 2025-06-03 16:06:51.395247 | orchestrator | 2025-06-03 16:06:51 | INFO  | Task c67e8744-cd62-472b-9c82-de6c1807e8ab (test) was prepared for execution. 2025-06-03 16:06:51.395328 | orchestrator | 2025-06-03 16:06:51 | INFO  | It takes a moment until task c67e8744-cd62-472b-9c82-de6c1807e8ab (test) has been started and output is visible here. 
2025-06-03 16:06:55.374375 | orchestrator |
2025-06-03 16:06:55.374783 | orchestrator | PLAY [Create test project] *****************************************************
2025-06-03 16:06:55.375047 | orchestrator |
2025-06-03 16:06:55.375375 | orchestrator | TASK [Create test domain] ******************************************************
2025-06-03 16:06:55.376503 | orchestrator | Tuesday 03 June 2025 16:06:55 +0000 (0:00:00.082) 0:00:00.082 **********
2025-06-03 16:06:58.967179 | orchestrator | changed: [localhost]
2025-06-03 16:06:58.967637 | orchestrator |
2025-06-03 16:06:58.969250 | orchestrator | TASK [Create test-admin user] **************************************************
2025-06-03 16:06:58.970168 | orchestrator | Tuesday 03 June 2025 16:06:58 +0000 (0:00:03.589) 0:00:03.671 **********
2025-06-03 16:07:03.467703 | orchestrator | changed: [localhost]
2025-06-03 16:07:03.468531 | orchestrator |
2025-06-03 16:07:03.470090 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2025-06-03 16:07:03.470329 | orchestrator | Tuesday 03 June 2025 16:07:03 +0000 (0:00:04.506) 0:00:08.178 **********
2025-06-03 16:07:09.447647 | orchestrator | changed: [localhost]
2025-06-03 16:07:09.448516 | orchestrator |
2025-06-03 16:07:09.448979 | orchestrator | TASK [Create test project] *****************************************************
2025-06-03 16:07:09.449947 | orchestrator | Tuesday 03 June 2025 16:07:09 +0000 (0:00:05.980) 0:00:14.159 **********
2025-06-03 16:07:13.485343 | orchestrator | changed: [localhost]
2025-06-03 16:07:13.485917 | orchestrator |
2025-06-03 16:07:13.486381 | orchestrator | TASK [Create test user] ********************************************************
2025-06-03 16:07:13.487408 | orchestrator | Tuesday 03 June 2025 16:07:13 +0000 (0:00:04.038) 0:00:18.197 **********
2025-06-03 16:07:17.602902 | orchestrator | changed: [localhost]
2025-06-03 16:07:17.603105 | orchestrator |
2025-06-03 16:07:17.604428 | orchestrator | TASK [Add member roles to user test] *******************************************
2025-06-03 16:07:17.605169 | orchestrator | Tuesday 03 June 2025 16:07:17 +0000 (0:00:04.116) 0:00:22.313 **********
2025-06-03 16:07:29.507058 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2025-06-03 16:07:29.507173 | orchestrator | changed: [localhost] => (item=member)
2025-06-03 16:07:29.507186 | orchestrator | changed: [localhost] => (item=creator)
2025-06-03 16:07:29.507193 | orchestrator |
2025-06-03 16:07:29.507200 | orchestrator | TASK [Create test server group] ************************************************
2025-06-03 16:07:29.507207 | orchestrator | Tuesday 03 June 2025 16:07:29 +0000 (0:00:11.901) 0:00:34.215 **********
2025-06-03 16:07:33.839253 | orchestrator | changed: [localhost]
2025-06-03 16:07:33.840343 | orchestrator |
2025-06-03 16:07:33.841315 | orchestrator | TASK [Create ssh security group] ***********************************************
2025-06-03 16:07:33.841361 | orchestrator | Tuesday 03 June 2025 16:07:33 +0000 (0:00:04.335) 0:00:38.550 **********
2025-06-03 16:07:39.427268 | orchestrator | changed: [localhost]
2025-06-03 16:07:39.427993 | orchestrator |
2025-06-03 16:07:39.428713 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2025-06-03 16:07:39.429626 | orchestrator | Tuesday 03 June 2025 16:07:39 +0000 (0:00:05.588) 0:00:44.139 **********
2025-06-03 16:07:43.719851 | orchestrator | changed: [localhost]
2025-06-03 16:07:43.719960 | orchestrator |
2025-06-03 16:07:43.720641 | orchestrator | TASK [Create icmp security group] **********************************************
2025-06-03 16:07:43.721589 | orchestrator | Tuesday 03 June 2025 16:07:43 +0000 (0:00:04.291) 0:00:48.431 **********
2025-06-03 16:07:48.106979 | orchestrator | changed: [localhost]
2025-06-03 16:07:48.107363 | orchestrator |
2025-06-03 16:07:48.108219 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2025-06-03 16:07:48.108743 | orchestrator | Tuesday 03 June 2025 16:07:48 +0000 (0:00:04.387) 0:00:52.818 **********
2025-06-03 16:07:52.176095 | orchestrator | changed: [localhost]
2025-06-03 16:07:52.176620 | orchestrator |
2025-06-03 16:07:52.177325 | orchestrator | TASK [Create test keypair] *****************************************************
2025-06-03 16:07:52.177587 | orchestrator | Tuesday 03 June 2025 16:07:52 +0000 (0:00:04.069) 0:00:56.887 **********
2025-06-03 16:07:56.473615 | orchestrator | changed: [localhost]
2025-06-03 16:07:56.475696 | orchestrator |
2025-06-03 16:07:56.475783 | orchestrator | TASK [Create test network topology] ********************************************
2025-06-03 16:07:56.475800 | orchestrator | Tuesday 03 June 2025 16:07:56 +0000 (0:00:04.294) 0:01:01.182 **********
2025-06-03 16:08:11.191954 | orchestrator | changed: [localhost]
2025-06-03 16:08:11.192064 | orchestrator |
2025-06-03 16:08:11.192082 | orchestrator | TASK [Create test instances] ***************************************************
2025-06-03 16:08:11.192094 | orchestrator | Tuesday 03 June 2025 16:08:11 +0000 (0:00:14.717) 0:01:15.900 **********
2025-06-03 16:10:25.231030 | orchestrator | changed: [localhost] => (item=test)
2025-06-03 16:10:25.231132 | orchestrator | changed: [localhost] => (item=test-1)
2025-06-03 16:10:25.231141 | orchestrator | changed: [localhost] => (item=test-2)
2025-06-03 16:10:25.231148 | orchestrator |
2025-06-03 16:10:25.231156 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-06-03 16:10:55.231231 | orchestrator | changed: [localhost] => (item=test-3)
2025-06-03 16:10:55.231380 | orchestrator |
2025-06-03 16:10:55.231484 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-06-03 16:11:25.232960 | orchestrator |
2025-06-03 16:11:25.233094 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-06-03 16:11:27.310341 | orchestrator | changed: [localhost] => (item=test-4)
2025-06-03 16:11:27.311930 | orchestrator |
2025-06-03 16:11:27.312024 | orchestrator | TASK [Add metadata to instances] ***********************************************
2025-06-03 16:11:27.313856 | orchestrator | Tuesday 03 June 2025 16:11:27 +0000 (0:03:16.121) 0:04:32.022 **********
2025-06-03 16:11:50.332332 | orchestrator | changed: [localhost] => (item=test)
2025-06-03 16:11:50.332548 | orchestrator | changed: [localhost] => (item=test-1)
2025-06-03 16:11:50.332566 | orchestrator | changed: [localhost] => (item=test-2)
2025-06-03 16:11:50.332575 | orchestrator | changed: [localhost] => (item=test-3)
2025-06-03 16:11:50.332583 | orchestrator | changed: [localhost] => (item=test-4)
2025-06-03 16:11:50.332592 | orchestrator |
2025-06-03 16:11:50.332601 | orchestrator | TASK [Add tag to instances] ****************************************************
2025-06-03 16:11:50.332611 | orchestrator | Tuesday 03 June 2025 16:11:50 +0000 (0:00:23.014) 0:04:55.036 **********
2025-06-03 16:12:22.640561 | orchestrator | changed: [localhost] => (item=test)
2025-06-03 16:12:22.640727 | orchestrator | changed: [localhost] => (item=test-1)
2025-06-03 16:12:22.640749 | orchestrator | changed: [localhost] => (item=test-2)
2025-06-03 16:12:22.641963 | orchestrator | changed: [localhost] => (item=test-3)
2025-06-03 16:12:22.642970 | orchestrator | changed: [localhost] => (item=test-4)
2025-06-03 16:12:22.643755 | orchestrator |
2025-06-03 16:12:22.644405 | orchestrator | TASK [Create test volume] ******************************************************
2025-06-03 16:12:22.644882 | orchestrator | Tuesday 03 June 2025 16:12:22 +0000 (0:00:32.312) 0:05:27.349 **********
2025-06-03 16:12:29.981799 | orchestrator | changed: [localhost]
2025-06-03 16:12:29.982375 | orchestrator |
2025-06-03 16:12:29.983231 | orchestrator | TASK [Attach test volume] ******************************************************
2025-06-03 16:12:29.983977 | orchestrator | Tuesday 03 June 2025 16:12:29 +0000 (0:00:07.343) 0:05:34.692 **********
2025-06-03 16:12:43.671965 | orchestrator | changed: [localhost]
2025-06-03 16:12:43.672064 | orchestrator |
2025-06-03 16:12:43.672075 | orchestrator | TASK [Create floating ip address] **********************************************
2025-06-03 16:12:43.672083 | orchestrator | Tuesday 03 June 2025 16:12:43 +0000 (0:00:13.688) 0:05:48.381 **********
2025-06-03 16:12:48.810137 | orchestrator | ok: [localhost]
2025-06-03 16:12:48.811632 | orchestrator |
2025-06-03 16:12:48.811687 | orchestrator | TASK [Print floating ip address] ***********************************************
2025-06-03 16:12:48.811698 | orchestrator | Tuesday 03 June 2025 16:12:48 +0000 (0:00:05.140) 0:05:53.522 **********
2025-06-03 16:12:48.864104 | orchestrator | ok: [localhost] => {
2025-06-03 16:12:48.864682 | orchestrator |     "msg": "192.168.112.181"
2025-06-03 16:12:48.865908 | orchestrator | }
2025-06-03 16:12:48.866445 | orchestrator |
2025-06-03 16:12:48.867383 | orchestrator | PLAY RECAP *********************************************************************
2025-06-03 16:12:48.868026 | orchestrator | 2025-06-03 16:12:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-03 16:12:48.868443 | orchestrator | 2025-06-03 16:12:48 | INFO  | Please wait and do not abort execution.
2025-06-03 16:12:48.869675 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-03 16:12:48.870934 | orchestrator |
2025-06-03 16:12:48.871420 | orchestrator |
2025-06-03 16:12:48.872632 | orchestrator | TASKS RECAP ********************************************************************
2025-06-03 16:12:48.873204 | orchestrator | Tuesday 03 June 2025 16:12:48 +0000 (0:00:00.054) 0:05:53.576 **********
2025-06-03 16:12:48.874696 | orchestrator | ===============================================================================
2025-06-03 16:12:48.875143 | orchestrator | Create test instances ------------------------------------------------- 196.12s
2025-06-03 16:12:48.875749 | orchestrator | Add tag to instances --------------------------------------------------- 32.31s
2025-06-03 16:12:48.876154 | orchestrator | Add metadata to instances ---------------------------------------------- 23.01s
2025-06-03 16:12:48.876844 | orchestrator | Create test network topology ------------------------------------------- 14.72s
2025-06-03 16:12:48.877364 | orchestrator | Attach test volume ----------------------------------------------------- 13.69s
2025-06-03 16:12:48.877759 | orchestrator | Add member roles to user test ------------------------------------------ 11.90s
2025-06-03 16:12:48.878157 | orchestrator | Create test volume ------------------------------------------------------ 7.34s
2025-06-03 16:12:48.878442 | orchestrator | Add manager role to user test-admin ------------------------------------- 5.98s
2025-06-03 16:12:48.879317 | orchestrator | Create ssh security group ----------------------------------------------- 5.59s
2025-06-03 16:12:48.879391 | orchestrator | Create floating ip address ---------------------------------------------- 5.14s
2025-06-03 16:12:48.879868 | orchestrator | Create test-admin user -------------------------------------------------- 4.51s
2025-06-03 16:12:48.880126 | orchestrator | Create icmp security group ---------------------------------------------- 4.39s
2025-06-03 16:12:48.880584 | orchestrator | Create test server group ------------------------------------------------ 4.34s
2025-06-03 16:12:48.881151 | orchestrator | Create test keypair ----------------------------------------------------- 4.29s
2025-06-03 16:12:48.881712 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.29s
2025-06-03 16:12:48.881736 | orchestrator | Create test user -------------------------------------------------------- 4.12s
2025-06-03 16:12:48.882114 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.07s
2025-06-03 16:12:48.882506 | orchestrator | Create test project ----------------------------------------------------- 4.04s
2025-06-03 16:12:48.882937 | orchestrator | Create test domain ------------------------------------------------------ 3.59s
2025-06-03 16:12:48.884086 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s
2025-06-03 16:12:49.352098 | orchestrator | + server_list
2025-06-03 16:12:49.352168 | orchestrator | + openstack --os-cloud test server list
2025-06-03 16:12:53.336778 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-03 16:12:53.336880 | orchestrator | | ID                                   | Name   | Status | Networks                                           | Image        | Flavor     |
2025-06-03 16:12:53.336892 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-03 16:12:53.336902 | orchestrator | | 5e653e1e-35dd-4b03-89c6-c708eb2847c9 | test-4 | ACTIVE | auto_allocated_network=10.42.0.12, 192.168.112.164 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-03 16:12:53.336912 | orchestrator | | eab5b7a7-3709-466f-b8fc-a714b894dee2 | test-3 | ACTIVE | auto_allocated_network=10.42.0.16, 192.168.112.100 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-03 16:12:53.336922 | orchestrator | | 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e | test-2 | ACTIVE | auto_allocated_network=10.42.0.44, 192.168.112.125 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-03 16:12:53.336932 | orchestrator | | 04a6f903-19e5-4edd-96da-17c2b3783884 | test-1 | ACTIVE | auto_allocated_network=10.42.0.46, 192.168.112.138 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-03 16:12:53.336968 | orchestrator | | 3929443f-3628-4012-bde4-e0b23f5774c3 | test   | ACTIVE | auto_allocated_network=10.42.0.43, 192.168.112.181 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-03 16:12:53.336978 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-03 16:12:53.621585 | orchestrator | + openstack --os-cloud test server show test
2025-06-03 16:12:56.926646 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2025-06-03 16:12:56.926722 | orchestrator | | Field | Value |
2025-06-03 16:12:56.926730 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2025-06-03 16:12:56.926736 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-03 16:12:56.926748 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-03 16:12:56.926754 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-03 16:12:56.926760 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2025-06-03 16:12:56.926766 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-03 16:12:56.926771 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-03 16:12:56.926777 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-03 16:12:56.926796 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-03 16:12:56.926812 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-03 16:12:56.926821 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-03 16:12:56.926826 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-03 16:12:56.926832 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-03 16:12:56.926838 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-03 16:12:56.926843 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-03 16:12:56.926849 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-03 16:12:56.926854 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-03T16:08:41.000000 |
2025-06-03 16:12:56.926860 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-03 16:12:56.926866 | orchestrator | | accessIPv4 | |
2025-06-03 16:12:56.926875 | orchestrator | | accessIPv6 | |
2025-06-03 16:12:56.926881 | orchestrator | | addresses | auto_allocated_network=10.42.0.43, 192.168.112.181 |
2025-06-03 16:12:56.926892 | orchestrator | | config_drive | |
2025-06-03 16:12:56.926898 | orchestrator | | created | 2025-06-03T16:08:19Z |
2025-06-03 16:12:56.926903 | orchestrator | | description | None |
2025-06-03 16:12:56.926909 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-03 16:12:56.926914 | orchestrator | | hostId | cccfffe71d22461d2e7ee07fe1c1f08dd463eca7a62c87bfbcdef74c |
2025-06-03 16:12:56.926920 | orchestrator | | host_status | None |
2025-06-03 16:12:56.926926 | orchestrator | | id | 3929443f-3628-4012-bde4-e0b23f5774c3 |
2025-06-03 16:12:56.926931 | orchestrator | | image | Cirros 0.6.2 (5c865c54-6510-48b9-acbb-17096acdfdda) |
2025-06-03 16:12:56.926937 | orchestrator | | key_name | test |
2025-06-03 16:12:56.926946 | orchestrator | | locked | False |
2025-06-03 16:12:56.926951 | orchestrator | | locked_reason | None |
2025-06-03 16:12:56.926957 | orchestrator | | name | test |
2025-06-03 16:12:56.926970 | orchestrator | | pinned_availability_zone | None |
2025-06-03 16:12:56.926976 | orchestrator | | progress | 0 |
2025-06-03 16:12:56.926981 | orchestrator | | project_id | 0fdbe59f694845039e3631b2c5347356 |
2025-06-03 16:12:56.926987 | orchestrator | | properties | hostname='test' |
2025-06-03 16:12:56.926993 | orchestrator | | security_groups | name='ssh' |
2025-06-03 16:12:56.926998 | orchestrator | | | name='icmp' |
2025-06-03 16:12:56.927004 | orchestrator | | server_groups | None |
2025-06-03 16:12:56.927015 | orchestrator | | status | ACTIVE |
2025-06-03 16:12:56.927020 | orchestrator | | tags | test |
2025-06-03 16:12:56.927026 | orchestrator | | trusted_image_certificates | None |
2025-06-03 16:12:56.927032 | orchestrator | | updated | 2025-06-03T16:11:32Z |
2025-06-03 16:12:56.927040 | orchestrator | | user_id | 9962b5cd3530483eb88b6a8ed0135ff0 |
2025-06-03 16:12:56.927047 | orchestrator | | volumes_attached | delete_on_termination='False', id='ec898f96-aaec-4478-986e-6d6e609a55cf' |
2025-06-03 16:12:56.931148 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2025-06-03 16:12:57.182427 | orchestrator | + openstack --os-cloud test server show test-1
2025-06-03 16:13:00.382914 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2025-06-03 16:13:00.383016 | orchestrator | | Field | Value |
2025-06-03 16:13:00.383027 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2025-06-03 16:13:00.383032 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-03 16:13:00.383079 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-03 16:13:00.383084 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-03 16:13:00.383088 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2025-06-03 16:13:00.383092 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-03 16:13:00.383096 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-03 16:13:00.383100 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-03 16:13:00.383104 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-03 16:13:00.383119 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-03 16:13:00.383135 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-03 16:13:00.383140 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-03 16:13:00.383144 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-03 16:13:00.383152 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-03 16:13:00.383156 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-03 16:13:00.383160 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-03 16:13:00.383164 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-03T16:09:20.000000 |
2025-06-03 16:13:00.383168 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-03 16:13:00.383171 | orchestrator | | accessIPv4 | |
2025-06-03 16:13:00.383178 | orchestrator | | accessIPv6 | |
2025-06-03 16:13:00.383182 | orchestrator | | addresses | auto_allocated_network=10.42.0.46, 192.168.112.138 |
2025-06-03 16:13:00.383189 | orchestrator | | config_drive | |
2025-06-03 16:13:00.383193 | orchestrator | | created | 2025-06-03T16:08:57Z |
2025-06-03 16:13:00.383206 | orchestrator | | description | None |
2025-06-03 16:13:00.383210 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-03 16:13:00.383214 | orchestrator | | hostId | 94ac833d88af105381b13ee49f7e082273b4f25f32d38439d1f62e8b |
2025-06-03 16:13:00.383218 | orchestrator | | host_status | None |
2025-06-03 16:13:00.383222 | orchestrator | | id | 04a6f903-19e5-4edd-96da-17c2b3783884 |
2025-06-03 16:13:00.383225 | orchestrator | | image | Cirros 0.6.2 (5c865c54-6510-48b9-acbb-17096acdfdda) |
2025-06-03 16:13:00.383229 | orchestrator | | key_name | test |
2025-06-03 16:13:00.383236 | orchestrator | | locked | False |
2025-06-03 16:13:00.383240 | orchestrator | | locked_reason | None |
2025-06-03 16:13:00.383243 | orchestrator | | name | test-1 |
2025-06-03 16:13:00.383250 | orchestrator | | pinned_availability_zone | None |
2025-06-03 16:13:00.383257 | orchestrator | | progress | 0 |
2025-06-03 16:13:00.383262 | orchestrator | | project_id | 0fdbe59f694845039e3631b2c5347356 |
2025-06-03 16:13:00.383269 | orchestrator | | properties | hostname='test-1' |
2025-06-03 16:13:00.383279 | orchestrator | | security_groups | name='ssh' |
2025-06-03 16:13:00.383286 | orchestrator | | | name='icmp' |
2025-06-03 16:13:00.383292 | orchestrator | | server_groups | None |
2025-06-03 16:13:00.383298 | orchestrator | | status | ACTIVE |
2025-06-03 16:13:00.383304 | orchestrator | | tags | test |
2025-06-03 16:13:00.383314 | orchestrator | | trusted_image_certificates | None |
2025-06-03 16:13:00.383320 | orchestrator | | updated | 2025-06-03T16:11:36Z |
2025-06-03 16:13:00.383330 | orchestrator | | user_id | 9962b5cd3530483eb88b6a8ed0135ff0 |
2025-06-03 16:13:00.383342 | orchestrator | | volumes_attached | |
2025-06-03 16:13:00.387879 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2025-06-03 16:13:00.670480 | orchestrator | + openstack --os-cloud test server show test-2
2025-06-03 16:13:04.090737 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2025-06-03 16:13:04.090819 | orchestrator | | Field | Value |
2025-06-03 16:13:04.090826 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2025-06-03 16:13:04.090831 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-03 16:13:04.090835 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-03 16:13:04.090839 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-03 16:13:04.090843 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2025-06-03 16:13:04.090860 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-03 16:13:04.090864 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-03 16:13:04.090884 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-03 16:13:04.090888 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-03 16:13:04.090904 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-03 16:13:04.090908 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-03 16:13:04.090912 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-03 16:13:04.090916 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-03 16:13:04.090920 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-03 16:13:04.090924 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-03 16:13:04.090928 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-03 16:13:04.090932 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-03T16:10:04.000000 |
2025-06-03 16:13:04.090944 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-03 16:13:04.090948 | orchestrator | | accessIPv4 | |
2025-06-03 16:13:04.090952 | orchestrator | | accessIPv6 | |
2025-06-03 16:13:04.090956 | orchestrator | | addresses | auto_allocated_network=10.42.0.44, 192.168.112.125 |
2025-06-03 16:13:04.090962 | orchestrator | | config_drive | |
2025-06-03 16:13:04.090966 | orchestrator | | created | 2025-06-03T16:09:42Z |
2025-06-03 16:13:04.090970 | orchestrator | | description | None |
2025-06-03 16:13:04.090974 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-03 16:13:04.090983 | orchestrator | | hostId | 99f1ed32b758497fb3d46bc5975302f95599cbb29ca24c22a6551266 |
2025-06-03 16:13:04.090987 | orchestrator | | host_status | None |
2025-06-03 16:13:04.090991 | orchestrator | | id | 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e |
2025-06-03 16:13:04.091000 | orchestrator | | image | Cirros 0.6.2 (5c865c54-6510-48b9-acbb-17096acdfdda) |
2025-06-03 16:13:04.091004 | orchestrator | | key_name | test |
2025-06-03 16:13:04.091008 | orchestrator | | locked | False |
2025-06-03 16:13:04.091012 | orchestrator | | locked_reason | None |
2025-06-03 16:13:04.091016 | orchestrator | | name | test-2 |
2025-06-03 16:13:04.091022 | orchestrator | | pinned_availability_zone | None |
2025-06-03 16:13:04.091026 | orchestrator | | progress | 0 |
2025-06-03 16:13:04.091030 | orchestrator | | project_id | 0fdbe59f694845039e3631b2c5347356 |
2025-06-03 16:13:04.091034 | orchestrator | | properties | hostname='test-2' |
2025-06-03 16:13:04.091038 | orchestrator | | security_groups | name='ssh' |
2025-06-03 16:13:04.091042 | orchestrator | | | name='icmp' |
2025-06-03 16:13:04.091049 | orchestrator | | server_groups | None |
2025-06-03 16:13:04.091056 | orchestrator | | status | ACTIVE |
2025-06-03 16:13:04.091060 | orchestrator | | tags | test |
2025-06-03 16:13:04.091064 | orchestrator | | trusted_image_certificates | None |
2025-06-03 16:13:04.091068 | orchestrator | | updated | 2025-06-03T16:11:41Z |
2025-06-03 16:13:04.091074 | orchestrator | | user_id | 9962b5cd3530483eb88b6a8ed0135ff0 |
2025-06-03 16:13:04.091078 | orchestrator | | volumes_attached | |
2025-06-03 16:13:04.096958 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2025-06-03 16:13:04.494671 | orchestrator | + openstack --os-cloud test server show test-3
2025-06-03 16:13:07.851056 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2025-06-03 16:13:07.851160 | orchestrator | | Field | Value |
2025-06-03 16:13:07.851174 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2025-06-03 16:13:07.851213 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-03 16:13:07.851239 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-03 16:13:07.851251 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-03 16:13:07.851262 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2025-06-03 16:13:07.851273 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-03 16:13:07.851285 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-03 16:13:07.851296 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-03 16:13:07.851307 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-03 16:13:07.851334 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-03 16:13:07.851346 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-03 16:13:07.851365 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-03 16:13:07.851376 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-03 16:13:07.851387 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-03 16:13:07.851403 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-03 16:13:07.851414 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-03 16:13:07.851426 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-03T16:10:37.000000 |
2025-06-03 16:13:07.851437 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-03 16:13:07.851457 | orchestrator | | accessIPv4 | |
2025-06-03 16:13:07.851475 | orchestrator | | accessIPv6 | |
2025-06-03 16:13:07.851562 | orchestrator | | addresses | auto_allocated_network=10.42.0.16, 192.168.112.100 |
2025-06-03 16:13:07.851590 | orchestrator | | config_drive | |
2025-06-03 16:13:07.851621 | orchestrator | | created | 2025-06-03T16:10:21Z |
2025-06-03 16:13:07.851640 | orchestrator | | description | None |
2025-06-03 16:13:07.851658 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-03 16:13:07.851676 | orchestrator | | hostId | 94ac833d88af105381b13ee49f7e082273b4f25f32d38439d1f62e8b |
2025-06-03 16:13:07.851701 | orchestrator | | host_status | None |
2025-06-03 16:13:07.851720 | orchestrator | | id | eab5b7a7-3709-466f-b8fc-a714b894dee2 |
2025-06-03 16:13:07.851739 | orchestrator | | image | Cirros 0.6.2 (5c865c54-6510-48b9-acbb-17096acdfdda) |
2025-06-03 16:13:07.851757 | orchestrator | | key_name | test |
2025-06-03 16:13:07.851775 | orchestrator | | locked | False |
2025-06-03 16:13:07.851794 | orchestrator | | locked_reason | None |
2025-06-03 16:13:07.851812 | orchestrator | | name | test-3 |
2025-06-03 16:13:07.851851 | orchestrator | | pinned_availability_zone | None |
2025-06-03 16:13:07.851871 | orchestrator | | progress | 0 |
2025-06-03 16:13:07.851890 | orchestrator | | project_id | 0fdbe59f694845039e3631b2c5347356 |
2025-06-03 16:13:07.851908 | orchestrator | | properties | hostname='test-3' |
2025-06-03 16:13:07.851928 | orchestrator | | security_groups | name='ssh' |
2025-06-03 16:13:07.851945 | orchestrator | | | name='icmp' |
2025-06-03 16:13:07.851964 | orchestrator | | server_groups | None |
2025-06-03 16:13:07.851982 | orchestrator | | status | ACTIVE |
2025-06-03 16:13:07.852000 | orchestrator | | tags | test |
2025-06-03 16:13:07.852018 | orchestrator | | trusted_image_certificates | None |
2025-06-03 16:13:07.852039 | orchestrator | | updated | 2025-06-03T16:11:45Z |
2025-06-03 16:13:07.852076 | orchestrator | | user_id | 9962b5cd3530483eb88b6a8ed0135ff0 |
2025-06-03 16:13:07.852097 | orchestrator | | volumes_attached | |
2025-06-03 16:13:07.856341 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2025-06-03 16:13:08.275745 | orchestrator | + openstack --os-cloud test server show test-4
2025-06-03 16:13:11.738899 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2025-06-03 16:13:11.739006 | orchestrator | | Field | Value |
2025-06-03 16:13:11.739020 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2025-06-03 16:13:11.739027 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-03 16:13:11.739034 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-03 16:13:11.739040 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-03 16:13:11.739046 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2025-06-03 16:13:11.739074 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-03 16:13:11.739081 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-03 16:13:11.739087 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-03 16:13:11.739093 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-03 16:13:11.739111 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-03 16:13:11.739118 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-03 16:13:11.739128 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-03 16:13:11.739135 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-03 16:13:11.739142 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-03 16:13:11.739148 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-03 16:13:11.739154 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-03 16:13:11.739166 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-03T16:11:11.000000 |
2025-06-03 16:13:11.739173 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-03 16:13:11.739179 | orchestrator | | accessIPv4 | |
2025-06-03 16:13:11.739184 | orchestrator | | accessIPv6 | |
2025-06-03 16:13:11.739191 | orchestrator | | addresses | auto_allocated_network=10.42.0.12, 192.168.112.164 |
2025-06-03 16:13:11.739201 | orchestrator | | config_drive | |
2025-06-03 16:13:11.739208 | orchestrator | | created | 2025-06-03T16:10:54Z |
2025-06-03 16:13:11.739218 | orchestrator | | description | None |
2025-06-03 16:13:11.739224 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-03 16:13:11.739231 | orchestrator | | hostId | 99f1ed32b758497fb3d46bc5975302f95599cbb29ca24c22a6551266 |
2025-06-03 16:13:11.739237 | orchestrator | | host_status | None |
2025-06-03 16:13:11.739248 | orchestrator | | id | 5e653e1e-35dd-4b03-89c6-c708eb2847c9 |
2025-06-03 16:13:11.739254 | orchestrator | | image | Cirros 0.6.2 (5c865c54-6510-48b9-acbb-17096acdfdda) |
2025-06-03 16:13:11.739261 | orchestrator | | key_name | test |
2025-06-03 16:13:11.739268 | orchestrator | | locked | False |
2025-06-03 16:13:11.739274 | orchestrator | | locked_reason | None |
2025-06-03 16:13:11.739280 | orchestrator | | name | test-4 |
2025-06-03 16:13:11.739290 | orchestrator | | pinned_availability_zone | None |
2025-06-03 16:13:11.739296 | orchestrator | | progress | 0 |
2025-06-03 16:13:11.739306 | orchestrator | | project_id | 0fdbe59f694845039e3631b2c5347356 |
2025-06-03 16:13:11.739312 | orchestrator | | properties | hostname='test-4' |
2025-06-03 16:13:11.739317 | orchestrator | | security_groups | name='ssh' |
2025-06-03 16:13:11.739327 | orchestrator | | | name='icmp' |
2025-06-03 16:13:11.739333 | orchestrator | | server_groups | None |
2025-06-03 16:13:11.739339 | orchestrator | | status | ACTIVE |
2025-06-03 16:13:11.739344 | orchestrator | | tags | test |
2025-06-03 16:13:11.739350 | orchestrator | | trusted_image_certificates | None |
2025-06-03 16:13:11.739356 | orchestrator | | updated | 2025-06-03T16:11:50Z |
2025-06-03 16:13:11.739364 | orchestrator | | user_id | 9962b5cd3530483eb88b6a8ed0135ff0 |
2025-06-03 16:13:11.739370 | orchestrator | | volumes_attached | |
2025-06-03 16:13:11.745066 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
2025-06-03 16:13:12.206373 | orchestrator | + server_ping
2025-06-03 16:13:12.207788 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-06-03 16:13:12.208319 | orchestrator | ++ tr -d '\r'
2025-06-03 16:13:15.043741 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-03 16:13:15.043877 | orchestrator | + ping -c3 192.168.112.125
2025-06-03 16:13:15.054077 | orchestrator | PING 192.168.112.125 (192.168.112.125) 56(84) bytes of data.
2025-06-03 16:13:15.054166 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=1 ttl=63 time=5.40 ms 2025-06-03 16:13:16.052809 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=2 ttl=63 time=2.48 ms 2025-06-03 16:13:17.053985 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=3 ttl=63 time=1.45 ms 2025-06-03 16:13:17.054072 | orchestrator | 2025-06-03 16:13:17.054080 | orchestrator | --- 192.168.112.125 ping statistics --- 2025-06-03 16:13:17.054088 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-03 16:13:17.054095 | orchestrator | rtt min/avg/max/mdev = 1.448/3.110/5.402/1.674 ms 2025-06-03 16:13:17.054187 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:13:17.054198 | orchestrator | + ping -c3 192.168.112.164 2025-06-03 16:13:17.064544 | orchestrator | PING 192.168.112.164 (192.168.112.164) 56(84) bytes of data. 2025-06-03 16:13:17.064658 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=1 ttl=63 time=5.37 ms 2025-06-03 16:13:18.061698 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=2 ttl=63 time=1.91 ms 2025-06-03 16:13:19.062863 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=3 ttl=63 time=1.82 ms 2025-06-03 16:13:19.062958 | orchestrator | 2025-06-03 16:13:19.062973 | orchestrator | --- 192.168.112.164 ping statistics --- 2025-06-03 16:13:19.062986 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-03 16:13:19.062998 | orchestrator | rtt min/avg/max/mdev = 1.819/3.029/5.365/1.651 ms 2025-06-03 16:13:19.063011 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:13:19.063381 | orchestrator | + ping -c3 192.168.112.138 2025-06-03 16:13:19.075370 | orchestrator | PING 192.168.112.138 (192.168.112.138) 56(84) bytes of data. 
2025-06-03 16:13:19.075447 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=1 ttl=63 time=7.19 ms 2025-06-03 16:13:20.072209 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=2 ttl=63 time=2.53 ms 2025-06-03 16:13:21.073541 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=3 ttl=63 time=1.78 ms 2025-06-03 16:13:21.073624 | orchestrator | 2025-06-03 16:13:21.073633 | orchestrator | --- 192.168.112.138 ping statistics --- 2025-06-03 16:13:21.073641 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-03 16:13:21.073648 | orchestrator | rtt min/avg/max/mdev = 1.782/3.835/7.191/2.392 ms 2025-06-03 16:13:21.073995 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:13:21.074006 | orchestrator | + ping -c3 192.168.112.181 2025-06-03 16:13:21.084590 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data. 2025-06-03 16:13:21.084667 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=5.84 ms 2025-06-03 16:13:22.085642 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.64 ms 2025-06-03 16:13:23.084202 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=1.78 ms 2025-06-03 16:13:23.084291 | orchestrator | 2025-06-03 16:13:23.084301 | orchestrator | --- 192.168.112.181 ping statistics --- 2025-06-03 16:13:23.084309 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-03 16:13:23.084316 | orchestrator | rtt min/avg/max/mdev = 1.777/3.419/5.838/1.746 ms 2025-06-03 16:13:23.084324 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:13:23.084332 | orchestrator | + ping -c3 192.168.112.100 2025-06-03 16:13:23.096662 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data. 
2025-06-03 16:13:23.096762 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=7.20 ms 2025-06-03 16:13:24.093669 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.75 ms 2025-06-03 16:13:25.094900 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=1.78 ms 2025-06-03 16:13:25.094996 | orchestrator | 2025-06-03 16:13:25.095008 | orchestrator | --- 192.168.112.100 ping statistics --- 2025-06-03 16:13:25.095019 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-06-03 16:13:25.095028 | orchestrator | rtt min/avg/max/mdev = 1.784/3.912/7.199/2.357 ms 2025-06-03 16:13:25.095761 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-03 16:13:25.095840 | orchestrator | + compute_list 2025-06-03 16:13:25.095856 | orchestrator | + osism manage compute list testbed-node-3 2025-06-03 16:13:28.384036 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-03 16:13:28.384123 | orchestrator | | ID | Name | Status | 2025-06-03 16:13:28.384132 | orchestrator | |--------------------------------------+--------+----------| 2025-06-03 16:13:28.384139 | orchestrator | | 5e653e1e-35dd-4b03-89c6-c708eb2847c9 | test-4 | ACTIVE | 2025-06-03 16:13:28.384146 | orchestrator | | 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e | test-2 | ACTIVE | 2025-06-03 16:13:28.384152 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-03 16:13:28.625051 | orchestrator | + osism manage compute list testbed-node-4 2025-06-03 16:13:31.660844 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-03 16:13:31.661006 | orchestrator | | ID | Name | Status | 2025-06-03 16:13:31.661036 | orchestrator | |--------------------------------------+--------+----------| 2025-06-03 16:13:31.661058 | orchestrator | | 3929443f-3628-4012-bde4-e0b23f5774c3 | test | ACTIVE | 2025-06-03 16:13:31.662086 | orchestrator | 
+--------------------------------------+--------+----------+ 2025-06-03 16:13:31.896168 | orchestrator | + osism manage compute list testbed-node-5 2025-06-03 16:13:35.135468 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-03 16:13:35.135629 | orchestrator | | ID | Name | Status | 2025-06-03 16:13:35.135644 | orchestrator | |--------------------------------------+--------+----------| 2025-06-03 16:13:35.135656 | orchestrator | | eab5b7a7-3709-466f-b8fc-a714b894dee2 | test-3 | ACTIVE | 2025-06-03 16:13:35.135668 | orchestrator | | 04a6f903-19e5-4edd-96da-17c2b3783884 | test-1 | ACTIVE | 2025-06-03 16:13:35.135700 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-03 16:13:35.379640 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2025-06-03 16:13:38.486487 | orchestrator | 2025-06-03 16:13:38 | INFO  | Live migrating server 3929443f-3628-4012-bde4-e0b23f5774c3 2025-06-03 16:13:51.306899 | orchestrator | 2025-06-03 16:13:51 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress 2025-06-03 16:13:54.006406 | orchestrator | 2025-06-03 16:13:54 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress 2025-06-03 16:13:56.551332 | orchestrator | 2025-06-03 16:13:56 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress 2025-06-03 16:13:58.933889 | orchestrator | 2025-06-03 16:13:58 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress 2025-06-03 16:14:01.380190 | orchestrator | 2025-06-03 16:14:01 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress 2025-06-03 16:14:03.737472 | orchestrator | 2025-06-03 16:14:03 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress 2025-06-03 16:14:06.134811 | orchestrator | 2025-06-03 16:14:06 | 
INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress 2025-06-03 16:14:08.513781 | orchestrator | 2025-06-03 16:14:08 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress 2025-06-03 16:14:10.750093 | orchestrator | 2025-06-03 16:14:10 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress 2025-06-03 16:14:13.082809 | orchestrator | 2025-06-03 16:14:13 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) completed with status ACTIVE 2025-06-03 16:14:13.321817 | orchestrator | + compute_list 2025-06-03 16:14:13.321918 | orchestrator | + osism manage compute list testbed-node-3 2025-06-03 16:14:16.257095 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-03 16:14:16.257204 | orchestrator | | ID | Name | Status | 2025-06-03 16:14:16.257251 | orchestrator | |--------------------------------------+--------+----------| 2025-06-03 16:14:16.257264 | orchestrator | | 5e653e1e-35dd-4b03-89c6-c708eb2847c9 | test-4 | ACTIVE | 2025-06-03 16:14:16.257275 | orchestrator | | 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e | test-2 | ACTIVE | 2025-06-03 16:14:16.257287 | orchestrator | | 3929443f-3628-4012-bde4-e0b23f5774c3 | test | ACTIVE | 2025-06-03 16:14:16.257298 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-03 16:14:16.491633 | orchestrator | + osism manage compute list testbed-node-4 2025-06-03 16:14:19.036234 | orchestrator | +------+--------+----------+ 2025-06-03 16:14:19.036376 | orchestrator | | ID | Name | Status | 2025-06-03 16:14:19.036392 | orchestrator | |------+--------+----------| 2025-06-03 16:14:19.036404 | orchestrator | +------+--------+----------+ 2025-06-03 16:14:19.288320 | orchestrator | + osism manage compute list testbed-node-5 2025-06-03 16:14:22.152187 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-03 16:14:22.152280 | 
orchestrator | | ID | Name | Status | 2025-06-03 16:14:22.152287 | orchestrator | |--------------------------------------+--------+----------| 2025-06-03 16:14:22.152292 | orchestrator | | eab5b7a7-3709-466f-b8fc-a714b894dee2 | test-3 | ACTIVE | 2025-06-03 16:14:22.152298 | orchestrator | | 04a6f903-19e5-4edd-96da-17c2b3783884 | test-1 | ACTIVE | 2025-06-03 16:14:22.152303 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-03 16:14:22.435076 | orchestrator | + server_ping 2025-06-03 16:14:22.435696 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-03 16:14:22.435971 | orchestrator | ++ tr -d '\r' 2025-06-03 16:14:25.352614 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:14:25.352720 | orchestrator | + ping -c3 192.168.112.125 2025-06-03 16:14:25.360905 | orchestrator | PING 192.168.112.125 (192.168.112.125) 56(84) bytes of data. 
2025-06-03 16:14:25.360982 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=1 ttl=63 time=6.63 ms 2025-06-03 16:14:26.359275 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=2 ttl=63 time=2.75 ms 2025-06-03 16:14:27.361210 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=3 ttl=63 time=2.07 ms 2025-06-03 16:14:27.361309 | orchestrator | 2025-06-03 16:14:27.361322 | orchestrator | --- 192.168.112.125 ping statistics --- 2025-06-03 16:14:27.361333 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-06-03 16:14:27.361342 | orchestrator | rtt min/avg/max/mdev = 2.070/3.815/6.625/2.006 ms 2025-06-03 16:14:27.361352 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:14:27.361363 | orchestrator | + ping -c3 192.168.112.164 2025-06-03 16:14:27.374609 | orchestrator | PING 192.168.112.164 (192.168.112.164) 56(84) bytes of data. 2025-06-03 16:14:27.374699 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=1 ttl=63 time=8.81 ms 2025-06-03 16:14:28.369639 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=2 ttl=63 time=2.14 ms 2025-06-03 16:14:29.371150 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=3 ttl=63 time=1.72 ms 2025-06-03 16:14:29.371253 | orchestrator | 2025-06-03 16:14:29.371267 | orchestrator | --- 192.168.112.164 ping statistics --- 2025-06-03 16:14:29.371279 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-03 16:14:29.371291 | orchestrator | rtt min/avg/max/mdev = 1.716/4.221/8.810/3.249 ms 2025-06-03 16:14:29.371302 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:14:29.371314 | orchestrator | + ping -c3 192.168.112.138 2025-06-03 16:14:29.383958 | orchestrator | PING 192.168.112.138 (192.168.112.138) 56(84) bytes of data. 
2025-06-03 16:14:29.384032 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=1 ttl=63 time=8.47 ms 2025-06-03 16:14:30.378779 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=2 ttl=63 time=1.78 ms 2025-06-03 16:14:31.381337 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=3 ttl=63 time=1.88 ms 2025-06-03 16:14:31.381424 | orchestrator | 2025-06-03 16:14:31.381435 | orchestrator | --- 192.168.112.138 ping statistics --- 2025-06-03 16:14:31.381445 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-03 16:14:31.381453 | orchestrator | rtt min/avg/max/mdev = 1.781/4.043/8.470/3.130 ms 2025-06-03 16:14:31.381539 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:14:31.381552 | orchestrator | + ping -c3 192.168.112.181 2025-06-03 16:14:31.392276 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data. 2025-06-03 16:14:31.392348 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=6.26 ms 2025-06-03 16:14:32.388602 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=1.98 ms 2025-06-03 16:14:33.390347 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=1.81 ms 2025-06-03 16:14:33.390469 | orchestrator | 2025-06-03 16:14:33.390487 | orchestrator | --- 192.168.112.181 ping statistics --- 2025-06-03 16:14:33.391387 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-03 16:14:33.391482 | orchestrator | rtt min/avg/max/mdev = 1.812/3.350/6.255/2.055 ms 2025-06-03 16:14:33.391547 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:14:33.391561 | orchestrator | + ping -c3 192.168.112.100 2025-06-03 16:14:33.403346 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data. 
2025-06-03 16:14:33.403445 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=8.42 ms 2025-06-03 16:14:34.399026 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.33 ms 2025-06-03 16:14:35.401564 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=1.84 ms 2025-06-03 16:14:35.401667 | orchestrator | 2025-06-03 16:14:35.401679 | orchestrator | --- 192.168.112.100 ping statistics --- 2025-06-03 16:14:35.401687 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-03 16:14:35.401700 | orchestrator | rtt min/avg/max/mdev = 1.837/4.194/8.417/2.992 ms 2025-06-03 16:14:35.401805 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2025-06-03 16:14:38.517948 | orchestrator | 2025-06-03 16:14:38 | INFO  | Live migrating server eab5b7a7-3709-466f-b8fc-a714b894dee2 2025-06-03 16:14:51.070752 | orchestrator | 2025-06-03 16:14:51 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress 2025-06-03 16:14:53.411588 | orchestrator | 2025-06-03 16:14:53 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress 2025-06-03 16:14:55.755928 | orchestrator | 2025-06-03 16:14:55 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress 2025-06-03 16:14:58.081378 | orchestrator | 2025-06-03 16:14:58 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress 2025-06-03 16:15:00.338665 | orchestrator | 2025-06-03 16:15:00 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress 2025-06-03 16:15:02.638669 | orchestrator | 2025-06-03 16:15:02 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress 2025-06-03 16:15:05.118796 | orchestrator | 2025-06-03 16:15:05 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is 
still in progress 2025-06-03 16:15:07.429481 | orchestrator | 2025-06-03 16:15:07 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) completed with status ACTIVE 2025-06-03 16:15:07.429674 | orchestrator | 2025-06-03 16:15:07 | INFO  | Live migrating server 04a6f903-19e5-4edd-96da-17c2b3783884 2025-06-03 16:15:21.174590 | orchestrator | 2025-06-03 16:15:21 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress 2025-06-03 16:15:23.518220 | orchestrator | 2025-06-03 16:15:23 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress 2025-06-03 16:15:25.921879 | orchestrator | 2025-06-03 16:15:25 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress 2025-06-03 16:15:28.186812 | orchestrator | 2025-06-03 16:15:28 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress 2025-06-03 16:15:30.483375 | orchestrator | 2025-06-03 16:15:30 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress 2025-06-03 16:15:32.873994 | orchestrator | 2025-06-03 16:15:32 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress 2025-06-03 16:15:35.254944 | orchestrator | 2025-06-03 16:15:35 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress 2025-06-03 16:15:37.636292 | orchestrator | 2025-06-03 16:15:37 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) completed with status ACTIVE 2025-06-03 16:15:37.863475 | orchestrator | + compute_list 2025-06-03 16:15:37.863625 | orchestrator | + osism manage compute list testbed-node-3 2025-06-03 16:15:40.931159 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-03 16:15:40.931265 | orchestrator | | ID | Name | Status | 2025-06-03 16:15:40.931279 | orchestrator | 
|--------------------------------------+--------+----------| 2025-06-03 16:15:40.931291 | orchestrator | | 5e653e1e-35dd-4b03-89c6-c708eb2847c9 | test-4 | ACTIVE | 2025-06-03 16:15:40.931302 | orchestrator | | eab5b7a7-3709-466f-b8fc-a714b894dee2 | test-3 | ACTIVE | 2025-06-03 16:15:40.931313 | orchestrator | | 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e | test-2 | ACTIVE | 2025-06-03 16:15:40.931325 | orchestrator | | 04a6f903-19e5-4edd-96da-17c2b3783884 | test-1 | ACTIVE | 2025-06-03 16:15:40.931336 | orchestrator | | 3929443f-3628-4012-bde4-e0b23f5774c3 | test | ACTIVE | 2025-06-03 16:15:40.931347 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-03 16:15:41.180268 | orchestrator | + osism manage compute list testbed-node-4 2025-06-03 16:15:43.696425 | orchestrator | +------+--------+----------+ 2025-06-03 16:15:43.696587 | orchestrator | | ID | Name | Status | 2025-06-03 16:15:43.696603 | orchestrator | |------+--------+----------| 2025-06-03 16:15:43.696615 | orchestrator | +------+--------+----------+ 2025-06-03 16:15:43.958141 | orchestrator | + osism manage compute list testbed-node-5 2025-06-03 16:15:46.480729 | orchestrator | +------+--------+----------+ 2025-06-03 16:15:46.480834 | orchestrator | | ID | Name | Status | 2025-06-03 16:15:46.480848 | orchestrator | |------+--------+----------| 2025-06-03 16:15:46.480860 | orchestrator | +------+--------+----------+ 2025-06-03 16:15:46.736835 | orchestrator | + server_ping 2025-06-03 16:15:46.738130 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-03 16:15:46.738470 | orchestrator | ++ tr -d '\r' 2025-06-03 16:15:49.584874 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:15:49.584973 | orchestrator | + ping -c3 192.168.112.125 2025-06-03 16:15:49.600561 | orchestrator | PING 192.168.112.125 
(192.168.112.125) 56(84) bytes of data. 2025-06-03 16:15:49.600644 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=1 ttl=63 time=10.9 ms 2025-06-03 16:15:50.592274 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=2 ttl=63 time=2.32 ms 2025-06-03 16:15:51.594267 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=3 ttl=63 time=2.27 ms 2025-06-03 16:15:51.594365 | orchestrator | 2025-06-03 16:15:51.594380 | orchestrator | --- 192.168.112.125 ping statistics --- 2025-06-03 16:15:51.594398 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-03 16:15:51.594417 | orchestrator | rtt min/avg/max/mdev = 2.270/5.178/10.942/4.075 ms 2025-06-03 16:15:51.594441 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:15:51.594470 | orchestrator | + ping -c3 192.168.112.164 2025-06-03 16:15:51.604568 | orchestrator | PING 192.168.112.164 (192.168.112.164) 56(84) bytes of data. 
2025-06-03 16:15:51.604621 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=1 ttl=63 time=6.44 ms 2025-06-03 16:15:52.603030 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=2 ttl=63 time=2.81 ms 2025-06-03 16:15:53.605198 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=3 ttl=63 time=2.25 ms 2025-06-03 16:15:53.605290 | orchestrator | 2025-06-03 16:15:53.605305 | orchestrator | --- 192.168.112.164 ping statistics --- 2025-06-03 16:15:53.605318 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-06-03 16:15:53.605363 | orchestrator | rtt min/avg/max/mdev = 2.253/3.835/6.440/1.856 ms 2025-06-03 16:15:53.605376 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:15:53.605389 | orchestrator | + ping -c3 192.168.112.138 2025-06-03 16:15:53.620014 | orchestrator | PING 192.168.112.138 (192.168.112.138) 56(84) bytes of data. 2025-06-03 16:15:53.620085 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=1 ttl=63 time=9.96 ms 2025-06-03 16:15:54.613481 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=2 ttl=63 time=2.92 ms 2025-06-03 16:15:55.613850 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=3 ttl=63 time=1.81 ms 2025-06-03 16:15:55.613971 | orchestrator | 2025-06-03 16:15:55.613986 | orchestrator | --- 192.168.112.138 ping statistics --- 2025-06-03 16:15:55.613999 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-03 16:15:55.614010 | orchestrator | rtt min/avg/max/mdev = 1.814/4.895/9.957/3.607 ms 2025-06-03 16:15:55.614265 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:15:55.614285 | orchestrator | + ping -c3 192.168.112.181 2025-06-03 16:15:55.625697 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data. 
2025-06-03 16:15:55.625794 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=6.39 ms 2025-06-03 16:15:56.623664 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.54 ms 2025-06-03 16:15:57.626651 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=2.53 ms 2025-06-03 16:15:57.626740 | orchestrator | 2025-06-03 16:15:57.626751 | orchestrator | --- 192.168.112.181 ping statistics --- 2025-06-03 16:15:57.626760 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-03 16:15:57.626768 | orchestrator | rtt min/avg/max/mdev = 2.533/3.823/6.393/1.817 ms 2025-06-03 16:15:57.626777 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-03 16:15:57.626785 | orchestrator | + ping -c3 192.168.112.100 2025-06-03 16:15:57.637126 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data. 2025-06-03 16:15:57.637214 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=6.09 ms 2025-06-03 16:15:58.634843 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.46 ms 2025-06-03 16:15:59.635690 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=1.86 ms 2025-06-03 16:15:59.635893 | orchestrator | 2025-06-03 16:15:59.635957 | orchestrator | --- 192.168.112.100 ping statistics --- 2025-06-03 16:15:59.635974 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-03 16:15:59.635986 | orchestrator | rtt min/avg/max/mdev = 1.858/3.468/6.088/1.868 ms 2025-06-03 16:15:59.636093 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2025-06-03 16:16:02.784818 | orchestrator | 2025-06-03 16:16:02 | INFO  | Live migrating server 5e653e1e-35dd-4b03-89c6-c708eb2847c9 2025-06-03 16:16:15.058731 | orchestrator | 2025-06-03 16:16:15 | INFO  | Live migration of 
5e653e1e-35dd-4b03-89c6-c708eb2847c9 (test-4) is still in progress
2025-06-03 16:16:17.408144 | orchestrator | 2025-06-03 16:16:17 | INFO  | Live migration of 5e653e1e-35dd-4b03-89c6-c708eb2847c9 (test-4) is still in progress
2025-06-03 16:16:19.755420 | orchestrator | 2025-06-03 16:16:19 | INFO  | Live migration of 5e653e1e-35dd-4b03-89c6-c708eb2847c9 (test-4) is still in progress
2025-06-03 16:16:22.117555 | orchestrator | 2025-06-03 16:16:22 | INFO  | Live migration of 5e653e1e-35dd-4b03-89c6-c708eb2847c9 (test-4) is still in progress
2025-06-03 16:16:24.394982 | orchestrator | 2025-06-03 16:16:24 | INFO  | Live migration of 5e653e1e-35dd-4b03-89c6-c708eb2847c9 (test-4) is still in progress
2025-06-03 16:16:26.664610 | orchestrator | 2025-06-03 16:16:26 | INFO  | Live migration of 5e653e1e-35dd-4b03-89c6-c708eb2847c9 (test-4) is still in progress
2025-06-03 16:16:28.930194 | orchestrator | 2025-06-03 16:16:28 | INFO  | Live migration of 5e653e1e-35dd-4b03-89c6-c708eb2847c9 (test-4) is still in progress
2025-06-03 16:16:31.235463 | orchestrator | 2025-06-03 16:16:31 | INFO  | Live migration of 5e653e1e-35dd-4b03-89c6-c708eb2847c9 (test-4) completed with status ACTIVE
2025-06-03 16:16:31.235564 | orchestrator | 2025-06-03 16:16:31 | INFO  | Live migrating server eab5b7a7-3709-466f-b8fc-a714b894dee2
2025-06-03 16:16:41.937299 | orchestrator | 2025-06-03 16:16:41 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress
2025-06-03 16:16:44.296915 | orchestrator | 2025-06-03 16:16:44 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress
2025-06-03 16:16:46.674306 | orchestrator | 2025-06-03 16:16:46 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress
2025-06-03 16:16:48.937068 | orchestrator | 2025-06-03 16:16:48 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress
2025-06-03 16:16:51.229776 | orchestrator | 2025-06-03 16:16:51 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress
2025-06-03 16:16:53.601864 | orchestrator | 2025-06-03 16:16:53 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress
2025-06-03 16:16:55.864470 | orchestrator | 2025-06-03 16:16:55 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress
2025-06-03 16:16:58.167696 | orchestrator | 2025-06-03 16:16:58 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) completed with status ACTIVE
2025-06-03 16:16:58.167801 | orchestrator | 2025-06-03 16:16:58 | INFO  | Live migrating server 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e
2025-06-03 16:17:08.919916 | orchestrator | 2025-06-03 16:17:08 | INFO  | Live migration of 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e (test-2) is still in progress
2025-06-03 16:17:11.289626 | orchestrator | 2025-06-03 16:17:11 | INFO  | Live migration of 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e (test-2) is still in progress
2025-06-03 16:17:13.615111 | orchestrator | 2025-06-03 16:17:13 | INFO  | Live migration of 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e (test-2) is still in progress
2025-06-03 16:17:15.880368 | orchestrator | 2025-06-03 16:17:15 | INFO  | Live migration of 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e (test-2) is still in progress
2025-06-03 16:17:18.276056 | orchestrator | 2025-06-03 16:17:18 | INFO  | Live migration of 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e (test-2) is still in progress
2025-06-03 16:17:20.518995 | orchestrator | 2025-06-03 16:17:20 | INFO  | Live migration of 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e (test-2) is still in progress
2025-06-03 16:17:22.801822 | orchestrator | 2025-06-03 16:17:22 | INFO  | Live migration of 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e (test-2) is still in progress
2025-06-03 16:17:25.158706 | orchestrator | 2025-06-03 16:17:25 | INFO  | Live migration of 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e (test-2) completed with status ACTIVE
2025-06-03 16:17:25.158818 | orchestrator | 2025-06-03 16:17:25 | INFO  | Live migrating server 04a6f903-19e5-4edd-96da-17c2b3783884
2025-06-03 16:17:36.771125 | orchestrator | 2025-06-03 16:17:36 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress
2025-06-03 16:17:39.143024 | orchestrator | 2025-06-03 16:17:39 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress
2025-06-03 16:17:41.527844 | orchestrator | 2025-06-03 16:17:41 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress
2025-06-03 16:17:43.806916 | orchestrator | 2025-06-03 16:17:43 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress
2025-06-03 16:17:46.169105 | orchestrator | 2025-06-03 16:17:46 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress
2025-06-03 16:17:48.436805 | orchestrator | 2025-06-03 16:17:48 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress
2025-06-03 16:17:50.738478 | orchestrator | 2025-06-03 16:17:50 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress
2025-06-03 16:17:53.085854 | orchestrator | 2025-06-03 16:17:53 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress
2025-06-03 16:17:55.325985 | orchestrator | 2025-06-03 16:17:55 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) completed with status ACTIVE
2025-06-03 16:17:55.326191 | orchestrator | 2025-06-03 16:17:55 | INFO  | Live migrating server 3929443f-3628-4012-bde4-e0b23f5774c3
2025-06-03 16:18:05.229802 | orchestrator | 2025-06-03 16:18:05 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress
2025-06-03 16:18:07.557824 | orchestrator | 2025-06-03 16:18:07 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress
2025-06-03 16:18:09.916280 | orchestrator | 2025-06-03 16:18:09 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress
2025-06-03 16:18:12.256730 | orchestrator | 2025-06-03 16:18:12 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress
2025-06-03 16:18:14.548204 | orchestrator | 2025-06-03 16:18:14 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress
2025-06-03 16:18:16.841020 | orchestrator | 2025-06-03 16:18:16 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress
2025-06-03 16:18:19.163475 | orchestrator | 2025-06-03 16:18:19 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress
2025-06-03 16:18:21.499730 | orchestrator | 2025-06-03 16:18:21 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress
2025-06-03 16:18:23.852192 | orchestrator | 2025-06-03 16:18:23 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) completed with status ACTIVE
2025-06-03 16:18:24.097448 | orchestrator | + compute_list
2025-06-03 16:18:24.097547 | orchestrator | + osism manage compute list testbed-node-3
2025-06-03 16:18:26.769181 | orchestrator | +------+--------+----------+
2025-06-03 16:18:26.769289 | orchestrator | | ID | Name | Status |
2025-06-03 16:18:26.769305 | orchestrator | |------+--------+----------|
2025-06-03 16:18:26.769316 | orchestrator | +------+--------+----------+
2025-06-03 16:18:27.008133 | orchestrator | + osism manage compute list testbed-node-4
2025-06-03 16:18:30.020039 | orchestrator | +--------------------------------------+--------+----------+
2025-06-03 16:18:30.020162 | orchestrator | | ID | Name | Status |
2025-06-03 16:18:30.020183 | orchestrator | |--------------------------------------+--------+----------|
2025-06-03 16:18:30.020199 | orchestrator | | 5e653e1e-35dd-4b03-89c6-c708eb2847c9 | test-4 | ACTIVE |
2025-06-03 16:18:30.020214 | orchestrator | | eab5b7a7-3709-466f-b8fc-a714b894dee2 | test-3 | ACTIVE |
2025-06-03 16:18:30.020228 | orchestrator | | 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e | test-2 | ACTIVE |
2025-06-03 16:18:30.020266 | orchestrator | | 04a6f903-19e5-4edd-96da-17c2b3783884 | test-1 | ACTIVE |
2025-06-03 16:18:30.020282 | orchestrator | | 3929443f-3628-4012-bde4-e0b23f5774c3 | test | ACTIVE |
2025-06-03 16:18:30.020296 | orchestrator | +--------------------------------------+--------+----------+
2025-06-03 16:18:30.277977 | orchestrator | + osism manage compute list testbed-node-5
2025-06-03 16:18:32.827645 | orchestrator | +------+--------+----------+
2025-06-03 16:18:32.827755 | orchestrator | | ID | Name | Status |
2025-06-03 16:18:32.827770 | orchestrator | |------+--------+----------|
2025-06-03 16:18:32.827783 | orchestrator | +------+--------+----------+
2025-06-03 16:18:33.052719 | orchestrator | + server_ping
2025-06-03 16:18:33.053995 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-06-03 16:18:33.054074 | orchestrator | ++ tr -d '\r'
2025-06-03 16:18:36.012774 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-03 16:18:36.012856 | orchestrator | + ping -c3 192.168.112.125
2025-06-03 16:18:36.025537 | orchestrator | PING 192.168.112.125 (192.168.112.125) 56(84) bytes of data.
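The migrate-and-poll behaviour visible above ("Live migrating server …", then "still in progress" roughly every two seconds until "completed with status ACTIVE") can be sketched as a small shell helper. This is a hedged reconstruction, not osism's real implementation: the function name `wait_for_migration` and the two-second poll interval are assumptions, and it uses the plain `openstack` CLI rather than `osism manage compute migrate`.

```shell
# Sketch of the migrate-and-poll pattern seen in the log (illustrative,
# not the real osism code): start a live migration, then poll the server
# status every two seconds until it returns to ACTIVE.
wait_for_migration() {
    server="$1"; timeout="${2:-600}"; waited=0
    # --live-migration lets the scheduler pick the target host
    openstack server migrate --live-migration "$server" || return 1
    while [ "$waited" -lt "$timeout" ]; do
        status=$(openstack server show "$server" -f value -c status)
        if [ "$status" = "ACTIVE" ]; then
            echo "Live migration of $server completed with status ACTIVE"
            return 0
        fi
        echo "Live migration of $server is still in progress"
        sleep 2
        waited=$((waited + 2))
    done
    echo "Live migration of $server timed out" >&2
    return 1
}

# Only talk to a real cloud when the CLI is actually available
if command -v openstack >/dev/null 2>&1; then
    wait_for_migration 5e653e1e-35dd-4b03-89c6-c708eb2847c9
fi
```

Polling `server show` until the status leaves MIGRATING is the standard way to wait on a Nova live migration from a script; a production version would also treat ERROR as a terminal state.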
2025-06-03 16:18:36.025640 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=1 ttl=63 time=9.14 ms
2025-06-03 16:18:37.020922 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=2 ttl=63 time=3.06 ms
2025-06-03 16:18:38.023393 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=3 ttl=63 time=2.12 ms
2025-06-03 16:18:38.023512 | orchestrator |
2025-06-03 16:18:38.023538 | orchestrator | --- 192.168.112.125 ping statistics ---
2025-06-03 16:18:38.023560 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-03 16:18:38.023606 | orchestrator | rtt min/avg/max/mdev = 2.122/4.773/9.135/3.107 ms
2025-06-03 16:18:38.023627 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-03 16:18:38.023648 | orchestrator | + ping -c3 192.168.112.164
2025-06-03 16:18:38.035993 | orchestrator | PING 192.168.112.164 (192.168.112.164) 56(84) bytes of data.
2025-06-03 16:18:38.036094 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=1 ttl=63 time=6.32 ms
2025-06-03 16:18:39.031877 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=2 ttl=63 time=2.42 ms
2025-06-03 16:18:40.034473 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=3 ttl=63 time=2.36 ms
2025-06-03 16:18:40.034554 | orchestrator |
2025-06-03 16:18:40.034562 | orchestrator | --- 192.168.112.164 ping statistics ---
2025-06-03 16:18:40.034568 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-03 16:18:40.034622 | orchestrator | rtt min/avg/max/mdev = 2.357/3.697/6.315/1.851 ms
2025-06-03 16:18:40.034677 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-03 16:18:40.034683 | orchestrator | + ping -c3 192.168.112.138
2025-06-03 16:18:40.045023 | orchestrator | PING 192.168.112.138 (192.168.112.138) 56(84) bytes of data.
2025-06-03 16:18:40.045104 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=1 ttl=63 time=7.26 ms
2025-06-03 16:18:41.041117 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=2 ttl=63 time=2.42 ms
2025-06-03 16:18:42.042892 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=3 ttl=63 time=2.27 ms
2025-06-03 16:18:42.043000 | orchestrator |
2025-06-03 16:18:42.043016 | orchestrator | --- 192.168.112.138 ping statistics ---
2025-06-03 16:18:42.043029 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-03 16:18:42.043040 | orchestrator | rtt min/avg/max/mdev = 2.270/3.983/7.263/2.320 ms
2025-06-03 16:18:42.043876 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-03 16:18:42.043902 | orchestrator | + ping -c3 192.168.112.181
2025-06-03 16:18:42.056090 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data.
2025-06-03 16:18:42.056174 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=7.61 ms
2025-06-03 16:18:43.052670 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.07 ms
2025-06-03 16:18:44.053131 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=1.66 ms
2025-06-03 16:18:44.053221 | orchestrator |
2025-06-03 16:18:44.053240 | orchestrator | --- 192.168.112.181 ping statistics ---
2025-06-03 16:18:44.053257 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-03 16:18:44.053272 | orchestrator | rtt min/avg/max/mdev = 1.655/3.776/7.605/2.712 ms
2025-06-03 16:18:44.053286 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-03 16:18:44.053301 | orchestrator | + ping -c3 192.168.112.100
2025-06-03 16:18:44.064741 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data.
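The bash trace above (`+ server_ping`, the `for address in …` lines, and the `ping -c3` calls) can be reconstructed as the helper below. The function body is an assumption inferred from the trace, not the testbed's actual script; the `tr -d '\r'` strips carriage returns from the CLI output so the addresses are usable in the loop.

```shell
# Reconstruction of the server_ping helper seen in the trace: ping every
# ACTIVE floating IP of the "test" cloud three times. Names mirror the
# trace; the real testbed script may differ in detail.
server_ping() {
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address" || return 1   # stop on the first unreachable VM
    done
}

# Only run against a real cloud when the CLI is actually available
if command -v openstack >/dev/null 2>&1; then
    server_ping
fi
```

Running this between migrations, as the job does, confirms that the guest network path survives each live migration.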
2025-06-03 16:18:44.064846 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=7.18 ms
2025-06-03 16:18:45.061363 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.56 ms
2025-06-03 16:18:46.063177 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=2.12 ms
2025-06-03 16:18:46.063295 | orchestrator |
2025-06-03 16:18:46.063308 | orchestrator | --- 192.168.112.100 ping statistics ---
2025-06-03 16:18:46.063318 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-03 16:18:46.063326 | orchestrator | rtt min/avg/max/mdev = 2.124/3.956/7.184/2.289 ms
2025-06-03 16:18:46.063680 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2025-06-03 16:18:49.263987 | orchestrator | 2025-06-03 16:18:49 | INFO  | Live migrating server 5e653e1e-35dd-4b03-89c6-c708eb2847c9
2025-06-03 16:18:59.261350 | orchestrator | 2025-06-03 16:18:59 | INFO  | Live migration of 5e653e1e-35dd-4b03-89c6-c708eb2847c9 (test-4) is still in progress
2025-06-03 16:19:01.678006 | orchestrator | 2025-06-03 16:19:01 | INFO  | Live migration of 5e653e1e-35dd-4b03-89c6-c708eb2847c9 (test-4) is still in progress
2025-06-03 16:19:04.109433 | orchestrator | 2025-06-03 16:19:04 | INFO  | Live migration of 5e653e1e-35dd-4b03-89c6-c708eb2847c9 (test-4) is still in progress
2025-06-03 16:19:06.458286 | orchestrator | 2025-06-03 16:19:06 | INFO  | Live migration of 5e653e1e-35dd-4b03-89c6-c708eb2847c9 (test-4) is still in progress
2025-06-03 16:19:08.906106 | orchestrator | 2025-06-03 16:19:08 | INFO  | Live migration of 5e653e1e-35dd-4b03-89c6-c708eb2847c9 (test-4) is still in progress
2025-06-03 16:19:11.190417 | orchestrator | 2025-06-03 16:19:11 | INFO  | Live migration of 5e653e1e-35dd-4b03-89c6-c708eb2847c9 (test-4) is still in progress
2025-06-03 16:19:13.485248 | orchestrator | 2025-06-03 16:19:13 | INFO  | Live migration of 5e653e1e-35dd-4b03-89c6-c708eb2847c9 (test-4) completed with status ACTIVE
2025-06-03 16:19:13.485379 | orchestrator | 2025-06-03 16:19:13 | INFO  | Live migrating server eab5b7a7-3709-466f-b8fc-a714b894dee2
2025-06-03 16:19:23.614546 | orchestrator | 2025-06-03 16:19:23 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress
2025-06-03 16:19:26.032351 | orchestrator | 2025-06-03 16:19:26 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress
2025-06-03 16:19:28.406322 | orchestrator | 2025-06-03 16:19:28 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress
2025-06-03 16:19:30.650367 | orchestrator | 2025-06-03 16:19:30 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress
2025-06-03 16:19:32.895590 | orchestrator | 2025-06-03 16:19:32 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress
2025-06-03 16:19:35.170311 | orchestrator | 2025-06-03 16:19:35 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress
2025-06-03 16:19:37.468972 | orchestrator | 2025-06-03 16:19:37 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) is still in progress
2025-06-03 16:19:39.858793 | orchestrator | 2025-06-03 16:19:39 | INFO  | Live migration of eab5b7a7-3709-466f-b8fc-a714b894dee2 (test-3) completed with status ACTIVE
2025-06-03 16:19:39.858896 | orchestrator | 2025-06-03 16:19:39 | INFO  | Live migrating server 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e
2025-06-03 16:19:49.512420 | orchestrator | 2025-06-03 16:19:49 | INFO  | Live migration of 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e (test-2) is still in progress
2025-06-03 16:19:51.834557 | orchestrator | 2025-06-03 16:19:51 | INFO  | Live migration of 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e (test-2) is still in progress
2025-06-03 16:19:54.235077 | orchestrator | 2025-06-03 16:19:54 | INFO  | Live migration of 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e (test-2) is still in progress
2025-06-03 16:19:56.498491 | orchestrator | 2025-06-03 16:19:56 | INFO  | Live migration of 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e (test-2) is still in progress
2025-06-03 16:19:58.843511 | orchestrator | 2025-06-03 16:19:58 | INFO  | Live migration of 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e (test-2) is still in progress
2025-06-03 16:20:01.141481 | orchestrator | 2025-06-03 16:20:01 | INFO  | Live migration of 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e (test-2) is still in progress
2025-06-03 16:20:03.465618 | orchestrator | 2025-06-03 16:20:03 | INFO  | Live migration of 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e (test-2) completed with status ACTIVE
2025-06-03 16:20:03.465820 | orchestrator | 2025-06-03 16:20:03 | INFO  | Live migrating server 04a6f903-19e5-4edd-96da-17c2b3783884
2025-06-03 16:20:15.193456 | orchestrator | 2025-06-03 16:20:15 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress
2025-06-03 16:20:17.556114 | orchestrator | 2025-06-03 16:20:17 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress
2025-06-03 16:20:19.909125 | orchestrator | 2025-06-03 16:20:19 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress
2025-06-03 16:20:22.354715 | orchestrator | 2025-06-03 16:20:22 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress
2025-06-03 16:20:24.618308 | orchestrator | 2025-06-03 16:20:24 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress
2025-06-03 16:20:26.979007 | orchestrator | 2025-06-03 16:20:26 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress
2025-06-03 16:20:29.288704 | orchestrator | 2025-06-03 16:20:29 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) is still in progress
2025-06-03 16:20:31.671996 | orchestrator | 2025-06-03 16:20:31 | INFO  | Live migration of 04a6f903-19e5-4edd-96da-17c2b3783884 (test-1) completed with status ACTIVE
2025-06-03 16:20:31.672069 | orchestrator | 2025-06-03 16:20:31 | INFO  | Live migrating server 3929443f-3628-4012-bde4-e0b23f5774c3
2025-06-03 16:20:41.932790 | orchestrator | 2025-06-03 16:20:41 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress
2025-06-03 16:20:44.323761 | orchestrator | 2025-06-03 16:20:44 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress
2025-06-03 16:20:46.707068 | orchestrator | 2025-06-03 16:20:46 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress
2025-06-03 16:20:49.138851 | orchestrator | 2025-06-03 16:20:49 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress
2025-06-03 16:20:51.639643 | orchestrator | 2025-06-03 16:20:51 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress
2025-06-03 16:20:53.991415 | orchestrator | 2025-06-03 16:20:53 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress
2025-06-03 16:20:56.332708 | orchestrator | 2025-06-03 16:20:56 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress
2025-06-03 16:20:58.668059 | orchestrator | 2025-06-03 16:20:58 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress
2025-06-03 16:21:00.966142 | orchestrator | 2025-06-03 16:21:00 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) is still in progress
2025-06-03 16:21:03.349648 | orchestrator | 2025-06-03 16:21:03 | INFO  | Live migration of 3929443f-3628-4012-bde4-e0b23f5774c3 (test) completed with status ACTIVE
2025-06-03 16:21:03.599242 | orchestrator | + compute_list
2025-06-03 16:21:03.599348 | orchestrator | + osism manage compute list testbed-node-3
2025-06-03 16:21:06.088894 | orchestrator | +------+--------+----------+
2025-06-03 16:21:06.089034 | orchestrator | | ID | Name | Status |
2025-06-03 16:21:06.089051 | orchestrator | |------+--------+----------|
2025-06-03 16:21:06.089063 | orchestrator | +------+--------+----------+
2025-06-03 16:21:06.334878 | orchestrator | + osism manage compute list testbed-node-4
2025-06-03 16:21:08.787524 | orchestrator | +------+--------+----------+
2025-06-03 16:21:08.787625 | orchestrator | | ID | Name | Status |
2025-06-03 16:21:08.787639 | orchestrator | |------+--------+----------|
2025-06-03 16:21:08.787663 | orchestrator | +------+--------+----------+
2025-06-03 16:21:09.029432 | orchestrator | + osism manage compute list testbed-node-5
2025-06-03 16:21:12.038552 | orchestrator | +--------------------------------------+--------+----------+
2025-06-03 16:21:12.038646 | orchestrator | | ID | Name | Status |
2025-06-03 16:21:12.038657 | orchestrator | |--------------------------------------+--------+----------|
2025-06-03 16:21:12.038666 | orchestrator | | 5e653e1e-35dd-4b03-89c6-c708eb2847c9 | test-4 | ACTIVE |
2025-06-03 16:21:12.038674 | orchestrator | | eab5b7a7-3709-466f-b8fc-a714b894dee2 | test-3 | ACTIVE |
2025-06-03 16:21:12.038709 | orchestrator | | 2052164a-2dac-4ee8-b92c-4a1cdb4ffd9e | test-2 | ACTIVE |
2025-06-03 16:21:12.038717 | orchestrator | | 04a6f903-19e5-4edd-96da-17c2b3783884 | test-1 | ACTIVE |
2025-06-03 16:21:12.038725 | orchestrator | | 3929443f-3628-4012-bde4-e0b23f5774c3 | test | ACTIVE |
2025-06-03 16:21:12.038734 | orchestrator | +--------------------------------------+--------+----------+
2025-06-03 16:21:12.264742 | orchestrator | + server_ping
2025-06-03 16:21:12.266330 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-06-03 16:21:12.266883 | orchestrator | ++ tr -d '\r'
2025-06-03 16:21:15.071173 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-03 16:21:15.071272 | orchestrator | + ping -c3 192.168.112.125
2025-06-03 16:21:15.085525 | orchestrator | PING 192.168.112.125 (192.168.112.125) 56(84) bytes of data.
2025-06-03 16:21:15.085631 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=1 ttl=63 time=10.7 ms
2025-06-03 16:21:16.078459 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=2 ttl=63 time=2.29 ms
2025-06-03 16:21:17.080067 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=3 ttl=63 time=1.81 ms
2025-06-03 16:21:17.080204 | orchestrator |
2025-06-03 16:21:17.080213 | orchestrator | --- 192.168.112.125 ping statistics ---
2025-06-03 16:21:17.080219 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-03 16:21:17.080225 | orchestrator | rtt min/avg/max/mdev = 1.811/4.944/10.733/4.097 ms
2025-06-03 16:21:17.080282 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-03 16:21:17.080290 | orchestrator | + ping -c3 192.168.112.164
2025-06-03 16:21:17.094336 | orchestrator | PING 192.168.112.164 (192.168.112.164) 56(84) bytes of data.
2025-06-03 16:21:17.094421 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=1 ttl=63 time=9.41 ms
2025-06-03 16:21:18.089225 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=2 ttl=63 time=2.41 ms
2025-06-03 16:21:19.091194 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=3 ttl=63 time=1.93 ms
2025-06-03 16:21:19.091294 | orchestrator |
2025-06-03 16:21:19.091310 | orchestrator | --- 192.168.112.164 ping statistics ---
2025-06-03 16:21:19.091323 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-03 16:21:19.091334 | orchestrator | rtt min/avg/max/mdev = 1.926/4.582/9.407/3.417 ms
2025-06-03 16:21:19.091379 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-03 16:21:19.091393 | orchestrator | + ping -c3 192.168.112.138
2025-06-03 16:21:19.104414 | orchestrator | PING 192.168.112.138 (192.168.112.138) 56(84) bytes of data.
2025-06-03 16:21:19.104485 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=1 ttl=63 time=8.93 ms
2025-06-03 16:21:20.099283 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=2 ttl=63 time=2.42 ms
2025-06-03 16:21:21.100846 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=3 ttl=63 time=2.04 ms
2025-06-03 16:21:21.100949 | orchestrator |
2025-06-03 16:21:21.100964 | orchestrator | --- 192.168.112.138 ping statistics ---
2025-06-03 16:21:21.100977 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-03 16:21:21.101009 | orchestrator | rtt min/avg/max/mdev = 2.042/4.463/8.933/3.164 ms
2025-06-03 16:21:21.101557 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-03 16:21:21.101584 | orchestrator | + ping -c3 192.168.112.181
2025-06-03 16:21:21.113276 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data.
2025-06-03 16:21:21.113377 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=6.80 ms
2025-06-03 16:21:22.111879 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=3.12 ms
2025-06-03 16:21:23.114218 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=1.91 ms
2025-06-03 16:21:23.114320 | orchestrator |
2025-06-03 16:21:23.114335 | orchestrator | --- 192.168.112.181 ping statistics ---
2025-06-03 16:21:23.114349 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-03 16:21:23.114361 | orchestrator | rtt min/avg/max/mdev = 1.906/3.943/6.804/2.082 ms
2025-06-03 16:21:23.114407 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-03 16:21:23.114421 | orchestrator | + ping -c3 192.168.112.100
2025-06-03 16:21:23.122251 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data.
2025-06-03 16:21:23.122331 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=6.13 ms
2025-06-03 16:21:24.120663 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.74 ms
2025-06-03 16:21:25.121949 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=1.85 ms
2025-06-03 16:21:25.122108 | orchestrator |
2025-06-03 16:21:25.122126 | orchestrator | --- 192.168.112.100 ping statistics ---
2025-06-03 16:21:25.122139 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-03 16:21:25.122150 | orchestrator | rtt min/avg/max/mdev = 1.851/3.570/6.125/1.842 ms
2025-06-03 16:21:25.331038 | orchestrator | ok: Runtime: 0:17:49.858775
2025-06-03 16:21:25.382633 |
2025-06-03 16:21:25.382823 | TASK [Run tempest]
2025-06-03 16:21:25.925602 | orchestrator | skipping: Conditional result was False
2025-06-03 16:21:25.944076 |
2025-06-03 16:21:25.944303 | TASK [Check prometheus alert status]
2025-06-03 16:21:26.484801 | orchestrator | skipping: Conditional result was False
2025-06-03 16:21:26.486404 |
2025-06-03 16:21:26.486505 | PLAY RECAP
2025-06-03 16:21:26.486570 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-06-03 16:21:26.486595 |
2025-06-03 16:21:26.698756 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-06-03 16:21:26.699837 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-03 16:21:27.448810 |
2025-06-03 16:21:27.448985 | PLAY [Post output play]
2025-06-03 16:21:27.465575 |
2025-06-03 16:21:27.465725 | LOOP [stage-output : Register sources]
2025-06-03 16:21:27.546608 |
2025-06-03 16:21:27.546988 | TASK [stage-output : Check sudo]
2025-06-03 16:21:28.427029 | orchestrator | sudo: a password is required
2025-06-03 16:21:28.591172 | orchestrator | ok: Runtime: 0:00:00.065256
2025-06-03 16:21:28.606308 |
2025-06-03 16:21:28.606497 | LOOP [stage-output : Set source and destination for files and folders]
2025-06-03 16:21:28.649667 |
2025-06-03 16:21:28.650025 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-06-03 16:21:28.732501 | orchestrator | ok
2025-06-03 16:21:28.742774 |
2025-06-03 16:21:28.742954 | LOOP [stage-output : Ensure target folders exist]
2025-06-03 16:21:29.198308 | orchestrator | ok: "docs"
2025-06-03 16:21:29.198745 |
2025-06-03 16:21:29.508791 | orchestrator | ok: "artifacts"
2025-06-03 16:21:29.746268 | orchestrator | ok: "logs"
2025-06-03 16:21:29.768908 |
2025-06-03 16:21:29.769103 | LOOP [stage-output : Copy files and folders to staging folder]
2025-06-03 16:21:29.810153 |
2025-06-03 16:21:29.810496 | TASK [stage-output : Make all log files readable]
2025-06-03 16:21:30.113807 | orchestrator | ok
2025-06-03 16:21:30.124066 |
2025-06-03 16:21:30.124209 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-06-03 16:21:30.159166 | orchestrator | skipping: Conditional result was False
2025-06-03 16:21:30.175985 |
2025-06-03 16:21:30.176137 | TASK [stage-output : Discover log files for compression]
2025-06-03 16:21:30.203223 | orchestrator | skipping: Conditional result was False
2025-06-03 16:21:30.214905 |
2025-06-03 16:21:30.215049 | LOOP [stage-output : Archive everything from logs]
2025-06-03 16:21:30.255590 |
2025-06-03 16:21:30.255746 | PLAY [Post cleanup play]
2025-06-03 16:21:30.263500 |
2025-06-03 16:21:30.263603 | TASK [Set cloud fact (Zuul deployment)]
2025-06-03 16:21:30.322411 | orchestrator | ok
2025-06-03 16:21:30.334676 |
2025-06-03 16:21:30.334801 | TASK [Set cloud fact (local deployment)]
2025-06-03 16:21:30.369364 | orchestrator | skipping: Conditional result was False
2025-06-03 16:21:30.385497 |
2025-06-03 16:21:30.385644 | TASK [Clean the cloud environment]
2025-06-03 16:21:31.480438 | orchestrator | 2025-06-03 16:21:31 - clean up servers
2025-06-03 16:21:32.200780 | orchestrator | 2025-06-03 16:21:32 - testbed-manager
2025-06-03 16:21:32.283410 | orchestrator | 2025-06-03 16:21:32 - testbed-node-1
2025-06-03 16:21:32.583092 | orchestrator | 2025-06-03 16:21:32 - testbed-node-0
2025-06-03 16:21:32.671782 | orchestrator | 2025-06-03 16:21:32 - testbed-node-2
2025-06-03 16:21:32.770317 | orchestrator | 2025-06-03 16:21:32 - testbed-node-3
2025-06-03 16:21:32.867264 | orchestrator | 2025-06-03 16:21:32 - testbed-node-4
2025-06-03 16:21:32.962817 | orchestrator | 2025-06-03 16:21:32 - testbed-node-5
2025-06-03 16:21:33.050451 | orchestrator | 2025-06-03 16:21:33 - clean up keypairs
2025-06-03 16:21:33.071049 | orchestrator | 2025-06-03 16:21:33 - testbed
2025-06-03 16:21:33.105781 | orchestrator | 2025-06-03 16:21:33 - wait for servers to be gone
2025-06-03 16:21:41.805571 | orchestrator | 2025-06-03 16:21:41 - clean up ports
2025-06-03 16:21:42.015500 | orchestrator | 2025-06-03 16:21:42 - 0f82f557-156a-4e50-882c-54446549ffd3
2025-06-03 16:21:42.493278 | orchestrator | 2025-06-03 16:21:42 - 32415d1d-67fb-48e2-b420-3bab920d9b0d
2025-06-03 16:21:42.736056 | orchestrator | 2025-06-03 16:21:42 - 5721f1e5-5961-4a31-80ae-729cd26239e0
2025-06-03 16:21:42.997292 | orchestrator | 2025-06-03 16:21:42 - 5f97ddf9-261a-482a-82db-2cf9abe8acb0
2025-06-03 16:21:43.217009 | orchestrator | 2025-06-03 16:21:43 - 603d5fc2-3688-4ac2-ad93-1c270d9837f1
2025-06-03 16:21:43.454306 | orchestrator | 2025-06-03 16:21:43 - 61569d3b-6c5d-4c4e-9520-ad4d2b6958f3
2025-06-03 16:21:43.719925 | orchestrator | 2025-06-03 16:21:43 - 7b8af06f-254f-4061-87ed-4c3956a5691e
2025-06-03 16:21:43.924965 | orchestrator | 2025-06-03 16:21:43 - clean up volumes
2025-06-03 16:21:44.047785 | orchestrator | 2025-06-03 16:21:44 - testbed-volume-4-node-base
2025-06-03 16:21:44.084394 | orchestrator | 2025-06-03 16:21:44 - testbed-volume-5-node-base
2025-06-03 16:21:44.124491 | orchestrator | 2025-06-03 16:21:44 - testbed-volume-0-node-base
2025-06-03 16:21:44.166665 | orchestrator | 2025-06-03 16:21:44 - testbed-volume-3-node-base
2025-06-03 16:21:44.207157 | orchestrator | 2025-06-03 16:21:44 - testbed-volume-1-node-base
2025-06-03 16:21:44.256037 | orchestrator | 2025-06-03 16:21:44 - testbed-volume-2-node-base
2025-06-03 16:21:44.299150 | orchestrator | 2025-06-03 16:21:44 - testbed-volume-manager-base
2025-06-03 16:21:44.343886 | orchestrator | 2025-06-03 16:21:44 - testbed-volume-5-node-5
2025-06-03 16:21:44.385942 | orchestrator | 2025-06-03 16:21:44 - testbed-volume-8-node-5
2025-06-03 16:21:44.433574 | orchestrator | 2025-06-03 16:21:44 - testbed-volume-2-node-5
2025-06-03 16:21:44.472680 | orchestrator | 2025-06-03 16:21:44 - testbed-volume-6-node-3
2025-06-03 16:21:44.514348 | orchestrator | 2025-06-03 16:21:44 - testbed-volume-4-node-4
2025-06-03 16:21:44.561181 | orchestrator | 2025-06-03 16:21:44 - testbed-volume-7-node-4
2025-06-03 16:21:44.607021 | orchestrator | 2025-06-03 16:21:44 - testbed-volume-3-node-3
2025-06-03 16:21:44.648775 | orchestrator | 2025-06-03 16:21:44 - testbed-volume-0-node-3
2025-06-03 16:21:44.693012 | orchestrator | 2025-06-03 16:21:44 - testbed-volume-1-node-4
2025-06-03 16:21:44.734332 | orchestrator | 2025-06-03 16:21:44 - disconnect routers
2025-06-03 16:21:44.864391 | orchestrator | 2025-06-03 16:21:44 - testbed
2025-06-03 16:21:46.316925 | orchestrator | 2025-06-03 16:21:46 - clean up subnets
2025-06-03 16:21:46.369453 | orchestrator | 2025-06-03 16:21:46 - subnet-testbed-management
2025-06-03 16:21:46.526421 | orchestrator | 2025-06-03 16:21:46 - clean up networks
2025-06-03 16:21:46.665920 | orchestrator | 2025-06-03 16:21:46 - net-testbed-management
2025-06-03 16:21:46.940724 | orchestrator | 2025-06-03 16:21:46 - clean up security groups
2025-06-03 16:21:46.982999 | orchestrator | 2025-06-03 16:21:46 - testbed-management
2025-06-03 16:21:47.110636 | orchestrator | 2025-06-03 16:21:47 - testbed-node
2025-06-03 16:21:47.216042 | orchestrator | 2025-06-03 16:21:47 - clean up floating ips
2025-06-03 16:21:47.248967 | orchestrator | 2025-06-03 16:21:47 - 81.163.193.16
2025-06-03 16:21:47.581831 | orchestrator | 2025-06-03 16:21:47 - clean up routers
2025-06-03 16:21:47.644466 | orchestrator | 2025-06-03 16:21:47 - testbed
2025-06-03 16:21:48.941226 | orchestrator | ok: Runtime: 0:00:18.085543
2025-06-03 16:21:48.945108 |
2025-06-03 16:21:48.945265 | PLAY RECAP
2025-06-03 16:21:48.945423 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-06-03 16:21:48.945485 |
2025-06-03 16:21:49.080773 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-03 16:21:49.083307 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-03 16:21:49.886775 |
2025-06-03 16:21:49.886960 | PLAY [Cleanup play]
2025-06-03 16:21:49.903282 |
2025-06-03 16:21:49.903467 | TASK [Set cloud fact (Zuul deployment)]
2025-06-03 16:21:49.956937 | orchestrator | ok
2025-06-03 16:21:49.964824 |
2025-06-03 16:21:49.964958 | TASK [Set cloud fact (local deployment)]
2025-06-03 16:21:49.999453 | orchestrator | skipping: Conditional result was False
2025-06-03 16:21:50.008551 |
2025-06-03 16:21:50.008731 | TASK [Clean the cloud environment]
2025-06-03 16:21:51.200486 | orchestrator | 2025-06-03 16:21:51 - clean up servers
2025-06-03 16:21:51.692134 | orchestrator | 2025-06-03 16:21:51 - clean up keypairs
2025-06-03 16:21:51.711097 | orchestrator | 2025-06-03 16:21:51 - wait for servers to be gone
2025-06-03 16:21:51.756054 | orchestrator | 2025-06-03 16:21:51 - clean up ports
2025-06-03 16:21:51.857484 | orchestrator | 2025-06-03 16:21:51 - clean up volumes
2025-06-03 16:21:51.923739 | orchestrator | 2025-06-03 16:21:51 - disconnect routers
2025-06-03 16:21:51.945603 | orchestrator | 2025-06-03 16:21:51 - clean up subnets
2025-06-03 16:21:51.963582 | orchestrator | 2025-06-03 16:21:51 - clean up networks
2025-06-03 16:21:52.123635 | orchestrator | 2025-06-03 16:21:52 - clean up security groups
2025-06-03 16:21:52.161450 | orchestrator | 2025-06-03 16:21:52 - clean up floating ips
2025-06-03 16:21:52.187634 | orchestrator | 2025-06-03 16:21:52 - clean up routers
2025-06-03 16:21:52.553082 | orchestrator | ok: Runtime: 0:00:01.405453
2025-06-03 16:21:52.556931 |
2025-06-03 16:21:52.557103 | PLAY RECAP
2025-06-03 16:21:52.557241 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-06-03 16:21:52.557313 |
2025-06-03 16:21:52.686121 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-03 16:21:52.688707 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-03 16:21:53.484916 |
2025-06-03 16:21:53.485085 | PLAY [Base post-fetch]
2025-06-03 16:21:53.500943 |
2025-06-03 16:21:53.501084 | TASK [fetch-output : Set log path for multiple nodes]
2025-06-03 16:21:53.556398 | orchestrator | skipping: Conditional result was False
2025-06-03 16:21:53.570035 |
2025-06-03 16:21:53.570235 | TASK [fetch-output : Set log path for single node] 2025-06-03 16:21:53.627930 | orchestrator | ok 2025-06-03 16:21:53.636758 | 2025-06-03 16:21:53.636897 | LOOP [fetch-output : Ensure local output dirs] 2025-06-03 16:21:54.114980 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/f4646c709e2e4f68ab8142ce5be2de26/work/logs" 2025-06-03 16:21:54.395600 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/f4646c709e2e4f68ab8142ce5be2de26/work/artifacts" 2025-06-03 16:21:54.683787 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/f4646c709e2e4f68ab8142ce5be2de26/work/docs" 2025-06-03 16:21:54.704097 | 2025-06-03 16:21:54.704255 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-06-03 16:21:55.683827 | orchestrator | changed: .d..t...... ./ 2025-06-03 16:21:55.684275 | orchestrator | changed: All items complete 2025-06-03 16:21:55.684360 | 2025-06-03 16:21:56.451844 | orchestrator | changed: .d..t...... ./ 2025-06-03 16:21:57.192231 | orchestrator | changed: .d..t...... 
./ 2025-06-03 16:21:57.226168 | 2025-06-03 16:21:57.226494 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-06-03 16:21:57.264851 | orchestrator | skipping: Conditional result was False 2025-06-03 16:21:57.270893 | orchestrator | skipping: Conditional result was False 2025-06-03 16:21:57.288405 | 2025-06-03 16:21:57.288533 | PLAY RECAP 2025-06-03 16:21:57.288619 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-06-03 16:21:57.288664 | 2025-06-03 16:21:57.425007 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-06-03 16:21:57.427758 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-06-03 16:21:58.165457 | 2025-06-03 16:21:58.165631 | PLAY [Base post] 2025-06-03 16:21:58.180865 | 2025-06-03 16:21:58.181028 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-06-03 16:21:59.397309 | orchestrator | changed 2025-06-03 16:21:59.407504 | 2025-06-03 16:21:59.407630 | PLAY RECAP 2025-06-03 16:21:59.407711 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-06-03 16:21:59.407803 | 2025-06-03 16:21:59.526736 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-06-03 16:21:59.527787 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-06-03 16:22:00.371938 | 2025-06-03 16:22:00.372227 | PLAY [Base post-logs] 2025-06-03 16:22:00.383276 | 2025-06-03 16:22:00.383447 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-06-03 16:22:00.859905 | localhost | changed 2025-06-03 16:22:00.878484 | 2025-06-03 16:22:00.878681 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-06-03 16:22:00.927782 | localhost | ok 2025-06-03 16:22:00.934716 | 2025-06-03 16:22:00.934911 | TASK [Set zuul-log-path fact] 2025-06-03 
16:22:00.952251 | localhost | ok 2025-06-03 16:22:00.964820 | 2025-06-03 16:22:00.964954 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-06-03 16:22:01.002460 | localhost | ok 2025-06-03 16:22:01.011799 | 2025-06-03 16:22:01.012006 | TASK [upload-logs : Create log directories] 2025-06-03 16:22:01.592894 | localhost | changed 2025-06-03 16:22:01.597885 | 2025-06-03 16:22:01.598042 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-06-03 16:22:02.109925 | localhost -> localhost | ok: Runtime: 0:00:00.004138 2025-06-03 16:22:02.115781 | 2025-06-03 16:22:02.115913 | TASK [upload-logs : Upload logs to log server] 2025-06-03 16:22:02.717952 | localhost | Output suppressed because no_log was given 2025-06-03 16:22:02.719915 | 2025-06-03 16:22:02.720023 | LOOP [upload-logs : Compress console log and json output] 2025-06-03 16:22:02.775694 | localhost | skipping: Conditional result was False 2025-06-03 16:22:02.781004 | localhost | skipping: Conditional result was False 2025-06-03 16:22:02.791932 | 2025-06-03 16:22:02.792094 | LOOP [upload-logs : Upload compressed console log and json output] 2025-06-03 16:22:02.849188 | localhost | skipping: Conditional result was False 2025-06-03 16:22:02.849532 | 2025-06-03 16:22:02.855506 | localhost | skipping: Conditional result was False 2025-06-03 16:22:02.862814 | 2025-06-03 16:22:02.862938 | LOOP [upload-logs : Upload console log and json output]